4 Can algorithms make aesthetic judgments?
With Norma Musih

Recommendation engines: Automating aesthetic judgment

Aesthetic judgment is a seemingly simple act of evaluating beauty in nature or in culture, of saying "this flower is beautiful", or "this is a good movie". It is a seemingly very personal and unreasoned decree; claims for beauty need not adhere to normative dictates, and they need no reasoning. Very much like love. Nevertheless, with modernity, a more complex view of aesthetic judgment emerged, epitomized in Immanuel Kant's Critique of Judgment. His view of aesthetic judgment was further developed, and politicized, by Hannah Arendt. Following Kant, Arendt sees aesthetic judgment as both personal and social, requiring no excuses but begging for justifications, and as communal, communicative, and political. We wish to take this line of inquiry further by pondering what happens to aesthetic judgment in a culture populated by recommendation engines. We therefore suggest the need to critically examine the ramifications of recommendation engines - as data-processing algorithms - not just in terms of the biases they create (Crawford, 2016; Ferguson, 2017; Gillespie, 2012a, 2012b; Mayer-Schönberger & Cukier, 2013), which are hard to discern because of their opacity (Pasquale, 2015); nor merely because of their tendency to create a filter bubble (Pariser, 2012; Turow, 2011), or their inherent undermining of privacy (van Dijck, 2014; Fuchs, 2011; Grosser, 2017; Hildebrandt, 2019; Kennedy & Moss, 2015). Rather, we seek to highlight how recommendation engines change the very meaning of culture (Anderson, 2013; Bail, 2014; Gillespie, 2016; Hallinan & Striphas, 2014; Striphas, 2015). Furthermore, by doing so they also undermine our freedom by excluding a particular faculty of human subjectivity: making an aesthetic judgment. We seek, then, to understand recommendation engines as automating aesthetic judgments.

We proceed by introducing the central role of recommendation engines in contemporary culture. While corporate, professional, and popular discourse highlights the objective, data-driven, mathematical nature of algorithms, we hypothesize that underlying the technological work of recommendation engines are also ontological assumptions about the nature of aesthetic judgment. Based on an analysis of public discourse on recommendation engines in Amazon and Netflix, we discern two prominent ontological assumptions, asserting aesthetic judgment as objective and individualistic. In the following sections, we position algorithmic recommendation engines within the realm of the discourse on culture, rather than merely as technical devices, and offer a critique of their assumptions about aesthetic judgment by referring to Arendt's work. Such a discussion stresses the particularity of the cultural assumptions underlying recommendation engines, rather than their universality, and helps highlight the political implications of the algorithmic conception of culture.

Algorithms in culture: Between technical neutrality and political worldview

In recent years, culture has become increasingly mediated by algorithmic devices that organize, prioritize, and curate cultural content for users based on data derived from their interactions with digital platforms. Recommendation engines epitomize this algorithmization of culture (Carah & Angus, 2018), changing how individuals encounter cultural artifacts, such as books and films.
That culture is mediated by cultural agents is not new. The cultural field has always been populated by multiple intermediaries - for example, critics, gatekeepers, and curators - who engage in the delicate craft of highlighting cultural artifacts worthy of our attention. These cultural mediators recommend cultural artifacts based on their artistic quality, social significance, or relevance to readers. As culture increasingly takes place online, and as using it produces a plethora of data, digital platforms now render these data into personalized recommendations and have become key intermediaries, or "infomediaries", as Morris aptly calls them, "increasingly responsible for shaping how audiences encounter and experience cultural content" (Morris, 2015). Recommendation engines have become consequential in determining our cultural intake in recent years, suggesting which videos to watch (YouTube), what songs to listen to (Spotify), and what posts to read (Facebook). The shift from established cultural intermediaries to algorithms introduces new logics to intermediation (Morris, 2015). We focus our empirical gaze on Amazon and Netflix as they have become leading commercial distributors of cultural artifacts worldwide, and so their recommendation engines now play a central role in shaping culture. In professional and popular discourse, the logic behind recommendation engines is seen as fairly transparent and straightforward. With access to a treasure trove of personal data about users' previous cultural choices, it seems plausible to be able to assess their cultural taste and make successful recommendations. It is indeed a maxim of dataism to consider data as representations of reality, and their mathematical manipulation as allowing the creation of objective knowledge (van Dijck, 2014). However, a decade and a half of social research has taught us that algorithms are far from being abstract, technical, mathematical, and hence objective systems. Rather, they are imbued with social, ideological, and cultural presuppositions (Crawford, 2016; Gillespie, 2014, 2016; Pariser, 2012; Pasquale, 2015). One way that social research on algorithms has sought to shed light on the politics of algorithmic systems is by attending to the ideological assumptions embedded in algorithms as an idea and a social process. Thomas and colleagues propose to examine algorithms as fetish - "social contracts in material form" - in order to unveil the "emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge" (Thomas et al., 2018). And Beer urges us to go beyond algorithms as technical and material in order to "explore how the notion or concept of the algorithm is also an important feature of their potential power" (Beer, 2017). We answer this call by asking how the discourse on recommendation engines is "evoked as a part of broader rationalities and ways of seeing the world" (Beer, 2017), or seeing culture, in our case. Recommendation engines should therefore be examined as an idea that promotes "certain visions of calculative objectivity and also in relation to the wider governmentalities that this concept might be used to open up" (Beer, 2017). Following this approach, then, we ask "what kinds of politics do [algorithms] instantiate?" (Crawford, 2016). The ways by which recommendation engines picture the world through data represent, we argue, a particular worldview, which has ramifications for culture.
We therefore reject the assumption that algorithms merely mathematically translate numeric data into knowledge and are, therefore, indifferent to political, social, or normative concerns. In doing so, we join the longstanding social research into algorithms, which seeks to expose their worldview. Rather than suggesting that algorithms distort reality, as the notion of bias suggests (e.g., Noble, 2019), we use the notion of worldview, which requires no presuppositions about reality. We therefore see recommendation engines as operating based on a particular notion of truth (i.e., a discourse) about aesthetic judgment, which is not universal but particular and hence ideological. Recommendation engines do not merely mediate culture but also change what culture means (Hallinan & Striphas, 2014). Being performative, they change the object they purport to measure - aesthetic judgment, in this case. Striking examples come from new data-based "aesthetic" categories created by Amazon, such as "Most-Wished-For books on Amazon.com", "Books Rated 4.8 Stars or Above", and "Page turners: books Kindle readers finish in three days or less". But the most paradigmatic of these new aesthetic categories is arguably "recommended for you", which offers personalized curation. These new categories of aesthetic judgment have stirred a vibrant public discussion. We unpack the public discussion about Amazon's and Netflix's recommendation engines through the analysis of articles published in major media outlets in the last decade.

Aesthetic judgment and recommendation engines

The link between culture and algorithms in general, and recommendation engines in particular, has become a topic of concern in public discourse in the last decade. At the most fundamental level, the rise of recommendation engines has meant that data has become central to the cultural field. The ability to automate and personalize recommendations requires a data-saturated media ecology. Making sense of data requires an abundance of data; the more varied the data, the better recommendation engines are able to identify and characterize users. This has made digital platforms "data-hungry". For Amazon, which first integrated data and algorithms into its book recommendations (Economist, 2019), this hunger for data affects its approach to book retailing. When Amazon launched its own digital reader, Fire, in 2014, it saw it "less as a communication device than an ingenious shopping platform and a way of gathering data about people in order to make even more accurate product recommendations" (Economist, 2014). The Amazon reader, then, was less of a consumer product, and more a means of producing data in the "assembly line" of personalized recommendations. Personalized recommendations, based on users' individual online behavior, account for 35% of Amazon's sales (Yek, 2017), and for 80% of the content viewed on Netflix (Chhabra, 2017). Figuring out users' tastes and likes - that is, predicting their aesthetic judgment - is a methodologically and technologically difficult task. But perhaps more fundamentally, the question of what people want is of a philosophical, psychological, and sociological nature. Algorithmically predicting aesthetic judgment, then, brings up not only methodological questions about how Amazon and Netflix know what we want, but also ontological questions about what it means to want. Put differently, the question is what aesthetic judgment entails in a digital culture.
The discourse on recommendation engines helps us disclose the answer to this question. Our empirical inquiry reveals two dominant assumptions about aesthetic judgment underlying recommendation engines: an assumption concerning the objective nature of aesthetic judgment, and an assumption concerning its individualist nature.

Aesthetic judgment as objective

One assumption underlying recommendation engines is that aesthetic judgment is an event that can be grasped objectively, with no recourse to subjectivity. The availability of quality, variety, and quantity of data allows an outside spectator (or a recommendation engine, in this case) to characterize with high confidence one's cultural taste, and predict their preferences. Quality, in this case, pertains to data as a good proxy for real-world behavior; variety and large quantity are needed in order to detect patterned behavior in the absence of theoretical hypotheses about relations between variables. The idea of aesthetic judgment as objective is not new and is perhaps mostly epitomized by the work of Pierre Bourdieu. Bourdieu's theory of cultural taste (1979 [1984]) has sprouted a cottage industry of studies, which have demonstrated the high correlation of cultural taste with socio-economic indicators. Commercial mass media have implemented his theory most prominently through the practice of segmentation - offering distinct cultural artifacts to distinct social categories, easily recognized and measured. Recommendation engines uphold this objectivist assumption, giving it a digital boost. An early journalistic account of recommendation engines brings to the fore the question of aesthetic judgment as an objective reality to be discovered by algorithms. The title, "How they know what you like before you do" (Moser, 2006), evokes the enigma of a predictive technology, which excludes subjectivity from the process of judging. This ability of recommendation engines assumes that aesthetic judgment need not involve a conscious act of free will. Recommendation engines are assumed to be devices that tap an objective and already-existing reality: taste. The notion of aesthetic judgment as an objective reality that can be gauged from data is articulated in the vision of Netflix's CEO, mentioned at the beginning of this book, that "one day ... we're able to show you exactly the right film or TV show for your mood" (Economist, 2019), a vision that assumes that judgment can be gauged without people's direct, conscious involvement. The objectivist assumption about aesthetic judgment underlying recommendation engines is criticized by observers for ignoring the social coordinates of this technology. Zeynep Tufekci, a Wired columnist and media researcher, argues that algorithms make an aesthetic judgment for you, not with you, and are therefore promoting a mode of non-communicative knowledge. Recommendation engines, she says, reify culture, thus rendering aesthetic judgment a derivative of an objective social structure. She points out a few concrete computational practices by which this takes place. One is making recommendations based on similar individuals:

Behind every "people like you" recommendation is a computational method for distilling stereotypes through data. Even when these methods work, they can help entrench the stereotypes they're mobilizing. They might easily recommend books about coding to boys and books about fashion to girls (Tufekci, 2019).

Recommendation engines rely on prejudgments concerning what makes people alike.
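To make the "people like you" logic concrete, consider a minimal sketch of neighborhood-based collaborative filtering, the family of methods this phrase gestures at. Everything here - the toy interaction matrix, the function names, the choice of cosine similarity - is illustrative, not a description of Amazon's or Netflix's actual systems:

```python
import numpy as np

# Toy user-item matrix: rows are users, columns are books;
# 1.0 means the user bought or liked the item, 0.0 means no interaction.
ratings = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1],
], dtype=float)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two users' interaction vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend_for(user: int, k: int = 2) -> list[int]:
    """Rank unseen items by the pooled choices of the k most similar users."""
    sims = np.array([
        cosine_similarity(ratings[user], ratings[other]) if other != user else -1.0
        for other in range(ratings.shape[0])
    ])
    neighbors = sims.argsort()[-k:]                # the k "people like you"
    scores = sims[neighbors] @ ratings[neighbors]  # pooled neighbor preferences
    scores[ratings[user] > 0] = -np.inf            # never re-recommend seen items
    return list(np.argsort(scores)[::-1])

print(recommend_for(user=0))  # items ranked by what similar users consumed
```

Even this crude sketch makes Tufekci's point visible: the "recommendation" is nothing but the pooled behavior of whoever the similarity metric declares to be like you.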
Such a theory of taste - shared by both marketing professionals and reductionist applications of Bourdieu - renders subjective judgment redundant. It posits an air-tight correlation between social location and taste. While this can be shown to be objectively valid at a given point - that is, can be proven by quantitative empirical research - it also rules out the possibility of aesthetic judgment as an expression of subjectivity. This computational practice, then, objectifies taste and judgment. Another computational practice of recommendation engines, creating yet another form of bias, concerns popularity. Algorithmic recommendations are influenced by identifying "trends" and prioritizing them. They "filter out common terms as background noise and highlight those that have acceleration and velocity on their side. This definition of trending buries ongoing conversations and amplifies sensational, new things" (Tufekci, 2019). Tufekci, then, reminds us that seemingly technical terms used to make recommendations - such as "people like you", or "trending" - are nonetheless socially constructed and hence political. More specifically, these seemingly neutral indicators of taste and judgment carry with them a priori assumptions about what taste and judgment are. Some commentators find troubling the idea that recommendation engines seem to reflect who we are. This would suggest that recommendation engines create "a digital extension of ourselves" (Satola, 2018) over which we have no agency. With automated recommendations, it becomes harder to clearly demarcate the boundaries of a self, which has autonomous will and intentions: "Blurred lines now exist between our own original thoughts about what we might like and what an algorithm decides for us" (Satola, 2018). To the extent that the role of subjectivity in the process of aesthetic judgment is demoted, and to the extent that aesthetic judgment can be seen as an effect of objective causes, it is also more susceptible to external manipulation. Hence, recommendation engines can be seen as forming taste, not merely gauging it. An early journalistic account of the music service Pandora explains that it does not merely "connect listeners with all kinds of music"; rather, "the website's personalized music recommendations have sparked new listening habits" for users (Moser, 2006). The objectivist assumption means that the lines between deciphering a user's aesthetic judgment and influencing it become blurred. Personalization entails not only which movies or books are recommended, but also how they are recommended. Users are assumed to be different not only in terms of their taste but also in how their taste can be solicited. Netflix, for example, may offer the same show differently to different users, that is, appealing to different aspects of their aesthetic judgment. Its algorithms personalize "how shows are presented to you" by using "different image tiles ... to entice different users" (Clarke, 2019). This procedure is done by creating objective aesthetic categories. A Netflix executive explains:

"We break down a show into multiple themes, and then we create artwork to fall into all of those themes", she says. This means that each show has a number of potential tile images each user may be shown ... as people start to watch the content, Netflix's massive trove of data kicks in to inform who sees what (Clarke, 2019).

A tile may highlight the romantic plot for one user and the suspenseful plot for another, "or maybe all the tiles on your account will be of the female characters in a show, while [for others it] flits between images of food and key props" (Clarke, 2019).
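How such theme-based tile selection might work can be sketched in a few lines. The toy model below - with invented tile files, themes, and titles - simply serves the candidate artwork whose theme dominates a user's viewing history; it stands in for what is, at Netflix, presumably a continuously tested and updated system:

```python
from collections import Counter

CANDIDATE_TILES = {          # tile image -> the theme it foregrounds (invented)
    "tile_kiss.jpg": "romance",
    "tile_chase.jpg": "suspense",
    "tile_dinner.jpg": "food",
}

def pick_tile(watch_history: list[str], title_themes: dict[str, str]) -> str:
    """Choose the tile whose theme dominates the user's past viewing."""
    theme_counts = Counter(title_themes[t] for t in watch_history if t in title_themes)
    best_theme, _ = max(theme_counts.items(), key=lambda kv: kv[1], default=("romance", 0))
    # Fall back to the first candidate if no tile matches the dominant theme.
    for tile, theme in CANDIDATE_TILES.items():
        if theme == best_theme:
            return tile
    return next(iter(CANDIDATE_TILES))

themes = {"Show A": "romance", "Show B": "suspense", "Show C": "romance"}
print(pick_tile(["Show A", "Show C", "Show B"], themes))  # -> "tile_kiss.jpg"
```

Even in this crude form, the "aesthetic category" doing the work is an objective, behavioral one, computed about the user rather than voiced by her.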
The objectification of aesthetic judgment by recommendation engines, with the resulting relegation of subjectivity, is seen by Amazon and Netflix as a solution to the problem of overabundance of choice, which can no longer be handled individually by users. "The beauty of having a virtual library is you have control and choice, but with a lot of choice, you can be overwhelmed", and so personalized recommendations are there to help you overcome this "choice paralysis" (Clarke, 2019). But relegating subjectivity from the process of judgment is also perceived as problematic, even by Amazon and Netflix. Their recommendation engines need to strike a balance between determining users' choices and letting users retain a sense of control. The Netflix executive explains: "We're trying to navigate within that tension of making it easy and showing them the right information so they can understand what they want to watch, but not be overly invasive" (Clarke, 2019). Users may be willing to get recommendations from algorithms, but their sense of autonomy should nevertheless be retained. The relegation of subjectivity does not go unnoticed by users either. "I look at my algorithm-generated 'Recommendations for Lizzie'", writes a columnist, "and I don't like that person - or the control involved in the process ... Netflix, in all its machine-learned wisdom, appears to know me better than myself" (O'Shea, 2018). The columnist resists the idea that recommendation engines control her cultural horizons. Netflix may be right to think she likes romantic comedies, but she also does not want her cultural diet to consist only of them:

Watching only my Netflix recommendations would be like using the internet only to look at cat pictures: reasonable on one level, but you would undeniably also miss out on some interesting stuff (O'Shea, 2018).

The columnist concludes with a call to reassert subjectivity vis-à-vis algorithms: "Just as we should resist outsourcing our ethical decisions to machines, we should not allow them to make cultural ones for us either" (O'Shea, 2018). This critique is echoed by another columnist who attempts to assert his subjectivity by manipulating the algorithm: choosing shows he does not actually watch "in hopes of having my preferences changed" (Beeber, 2019). A similar tone of critique of the objectification of taste is reiterated by The Guardian, which asked readers to report their "weirdest Netflix recommendations" (Lee, 2015), following a case where viewers looking for a film in the vein of the teen comedy The Inbetweeners Movie reported that Netflix suggested the Holocaust drama The Boy in the Striped Pyjamas.

Aesthetic judgment as individualistic

The second assumption concerning recommendation engines is that aesthetic judgment is individualistic. Underlying it is a model of culture as an assortment of artifacts from which individuals can pick. This individualistic assumption circumvents the role of inter-subjectivity in the formation of culture.
While culture can be seen as individualistic, that is, as dyadic relations between individuals and culture, it can also be understood (as we will expand in the next section) as a social sphere constituted among individuals and based not only on consumption but on communication as well. The inability of recommendation engines to access such communicative inter-subjectivity does not go unnoticed by observers, nor even by the digital platforms themselves. An article in The Atlantic raises the conundrum of why Amazon bought Goodreads - "a social network for book nerds with a devoted but far from enormous 16 million members" - for $150 million. The intuitive hypothesis would be that Amazon was after "a vast trove of data on Goodreads members" (Weissmann, 2013), data that would then be algorithmically analyzed in order to automate recommendations. This hypothesis is compatible with the objectivist assumption outlined earlier. The article, however, suggests a counter-hypothesis: it is the failure of algorithmic analysis of data which led to the acquisition. Amazon noticed that recommendation engines fail among avid readers; within this group, the power of recommendations still lies in personal interactions with other readers. Avid readers, the 20% of the population who read 80% of books, now rely more on "personal recommendations from people they know" (Weissmann, 2013), received mostly through social media. This deals a blow to the algorithmic model: "What they're not relying on much more heavily are recommendation engines" (Weissmann, 2013). Amazon, then, acknowledges that recommendation engines ignore the communal and inter-subjective nature of culture, and therefore fail to produce good recommendations for avid readers. The article hypothesizes, then, that Amazon has bought a very old-fashioned technology of a vibrant literary universe in order to "transmit the recommendations of prolific readers to the average reader" (Weissmann, 2013). Since "11 percent of book buyers make about 46 percent of recommendations" (Weissmann, 2013), the cultural conversation that takes place on Goodreads is valuable for Amazon. The imperfect ability of recommendation engines to gauge aesthetic judgment, mentioned at the end of the previous section, can also result from their neglect of the communal and discursive nature of culture. While culture carries moral, normative, and political undertones, these are overlooked by algorithms; algorithms deal with data, which serve as a proxy for culture, not with culture per se. An article in Wired points to the biases that this agnostic approach yields. "Curation algorithms", the article argues, "are largely amoral. They're engineered to show us things we are statistically likely to want to see, content that people similar to us have found engaging - even if it's stuff that's factually unreliable or potentially harmful" (DiResta, 2019). For example, anti-vaccine books have topped Amazon's Best Sellers in "categories ranging from Emergency Pediatrics to History of Medicine to Chemistry" (DiResta, 2019). The reason is inherent to the operation of algorithms, which the article laments: "recommendation algorithms can be gamed to make fringe ideas appear mainstream" (DiResta, 2019). Recommendation engines carry an assumption of methodological and epistemological individualism (Hayek, 1942; Weber, 1978); they are geared toward producing knowledge about individuals and for individuals. Within this framework, "culture" is reified.
One indication of that is the terminology of prediction, discovery, and serendipity, which prevails in the discourse on recommendation engines (McCarthy, 2017). These terms reveal the presumed type of relationship between humans and machines underlying recommendation engines. Prediction assumes that given enough relevant data, recommendation engines can find out which cultural artifacts an individual may like to consume. Prediction is presumably descriptive, purporting to describe an event that will take place in the future with some probability. In the case of recommendation engines, however, prediction is also performative; its very existence is geared toward changing the probability that an event will occur. Amazon does not predict that someone may buy an item; instead it seeks to mobilize this "prediction" in order to increase the probability that she will. Discovery refers to a desired characteristic of recommendation engines - their ability to break the closed-circuit feedback loop, which would recommend users "more of the same" and create a filter bubble around them. To overcome this problem, recommendation engines strive to mimic the real-world experience of discovery, by programming serendipity into algorithms, which will allow users to happily run into new and surprising cultural artifacts as if they were in a second-hand indie bookshop. Prediction, on the one hand, and discovery and serendipity, on the other hand, are quite contradictory, representing two poles of recommendation engines. Indeed, much of the discourse on recommendation engines, propagated by Netflix and Amazon, concerns the need to balance between giving people what they want (the prediction pole) and surprising them with new cultural artifacts (the discovery/serendipity pole). A Netflix executive explains:

The average consumer is going to look at 40 to 50 titles to make their choice, so we have to put the right 40 to 50 titles in front of them without falling into the filter bubble. We have to make sure there's diversity and serendipity in those, and we have to use the signals of what they've told us before (Clarke, 2019).
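One common way such a balance can be operationalized - sketched below with invented relevance scores and similarities - is a greedy re-ranker in the spirit of maximal marginal relevance: each slot on the slate trades predicted relevance against redundancy with the titles already chosen. The `tradeoff` parameter is the knob between the two poles; nothing here reflects Netflix's actual implementation:

```python
def rerank(candidates: dict[str, float],
           similarity: dict[tuple[str, str], float],
           slate_size: int = 5,
           tradeoff: float = 0.7) -> list[str]:
    """tradeoff=1.0 is pure prediction; lower values buy more diversity."""
    slate: list[str] = []
    pool = dict(candidates)
    while pool and len(slate) < slate_size:
        def marginal_score(title: str) -> float:
            # Redundancy = similarity to the closest title already on the slate.
            redundancy = max((similarity.get((title, s), 0.0) for s in slate), default=0.0)
            return tradeoff * pool[title] - (1 - tradeoff) * redundancy
        best = max(pool, key=marginal_score)
        slate.append(best)
        del pool[best]
    return slate

relevance = {"RomCom 1": 0.9, "RomCom 2": 0.85, "Thriller": 0.6, "Documentary": 0.5}
sim = {("RomCom 1", "RomCom 2"): 0.95, ("RomCom 2", "RomCom 1"): 0.95}
print(rerank(relevance, sim, slate_size=3))  # the thriller edges out the near-duplicate rom-com
```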
A recurrent normative argument for the discovery potential of recommendation engines is the democratization of cultural taste. Recommendation engines expose users with low cultural capital to cultural artifacts they would not have accessed otherwise. This may be seen as a second coming of Walter Benjamin's insistence on the liberating potential of mechanical reproduction (Benjamin, 1969). This promise is critically explored in a New Yorker article on Amazon's fine-art department:

Amazon does not seem particularly interested in recommending art that subverts expectations or disturbs the comfortable. In fact, Amazon's model of personalized recommendation and mass appeal explicitly undermines the possibility of discovery that art dealers compete to offer. Instead, the store's "window displays" exhibit what its semi-automated gallerists think users want to see: famous names, bargain prices, and kitsch by the yard ... Discovery is left to the experts, and the hegemony of high culture, far from being undermined by Amazon, is reinforced (Mauk, 2013).

The article shoots three arrows of critique at recommendation engines, arguing, first, that they recommend art that is banal and comfortable rather than subversive and new; second, that they cannot compete with a human curator; and hence, third, that they do not actually democratize taste. Instead, recommendation engines flatter users, refrain from challenging them, and as a result fortify, rather than subvert, traditionalist, consensual art. This image of culture as a stagnant, anti-communicative sphere, promoted by algorithms, is compared in the article with an idealized image of what the Web could have done for culture, and for discovery in culture:

Benjamin's arcades and their related concepts ... have been linked to the Internet before, with writers saying that in the early days of the Web one could idly wander, flaneur-like, through virtual spaces. But, just as arcades were replaced by the efficient shopping experience of the department store ... the Wild West of the early Web has been replaced by a thoroughly organized virtual space (Mauk, 2013).

The Web, which could have potentially led to more exploration and discovery in culture, became overly controlled by algorithms that undermine it as a communicative space. This romantic-humanist critique of recommendation engines is reiterated by many commentators, lamenting the narrowing-down of engagement with culture due to the individualization that recommendation engines promote. One article compares the experience of searching for books online with that of searching for them in a bookstore:

There is something special about walking into a bookstore and exploring the collection ... if it is my first time there, I tend to follow a similar path through the store to get myself acquainted. First, I float toward the literature section and walk across the wall from A to Z, scouring through the names of authors both familiar and unknown. Then, my eyes wander to the history and philosophy sections, where I can usually find esoteric titles that sometimes hint more at the tastes of the bookstore employees than the interests of their customers (Satola, 2018).

Such wild and unexpected discovery stands "in stark contrast to the clickbait world of the internet" (Satola, 2018). Amazon's recommendation algorithms "are destroying the humanistic side of reading and how we share books with others", that is, destroying the communality of culture. Whereas our stroll through a bookstore is described as a process of discovery, where we can at least get a glimpse of an actually existing "culture" as a social phenomenon, "the Amazon algorithm is set up so that you only see what the site wants you to see ... [which] reinforces your current tastes and opinions" (Satola, 2018). While the former expands your horizons and makes you face culture-at-large, the latter narrows them and blinds you to the social and communal character of culture: "What algorithms take away from the modern reading experience is its crucial interpersonal dimensions" (Satola, 2018). A few commentators see recommendation engines as offering a new means for making culture communal and inter-subjective, rather than individualistic. By interrelating personal data from different users of a music service, algorithms are able to facilitate social communication among them, making "'music discovery' a social activity" (Moser, 2006). This, according to a study cited in the article, will lead to a democratization of taste:

Instead of primarily disc jockeys and music videos shaping how we view music, we have a greater opportunity to hear from each other ... These tools allow people to play a greater role in shaping culture, which, in turn, shapes themselves (Moser, 2006).
The study found that 58% of participants reported being exposed to "a wider variety of music since using any online music service" (Moser, 2006). This view upholds recommendation engines' role in revitalizing culture: "People are so hungry to get reconnected with [new] music" (Moser, 2006). Netflix's recommendation engine is also interpreted in the same light: the more than one billion ratings contributed by customers on its site (as early as 2006), rendered into algorithmic recommendations, account for 60% of the movies rented. The article refers to these algorithmic recommendations as "community-driven" (Moser, 2006), suggesting that they reflect a kind of social communication about culture. But are relations among data points really a form of social communication? An article delving into this question distinguishes between our true self (which underlies our taste) and our algorithmic self (which underlies how algorithms interpret our taste). Netflix, the article argues, "can find out what you like, but it can't read your mind" (Ditum, 2019). For algorithms, "there is no 'you'" (Ditum, 2019), only how you act online. The article distinguishes between what might be called an intersubjective assessment of someone's taste and an algorithmic assessment. A friend giving us an accurate recommendation for a book is proof that she really knows us, since "in the usual version of ourselves, taste is at the center" (Ditum, 2019). With algorithmic judgment of taste, however, we enter a new realm where this very ontology of selfhood is denied:

When it comes to Netflix, I simply don't exist. There's a general assumption that a service such as Netflix must be profiling you ... But that's not how Netflix works. All it knows is what you watch, and what other people who watched those things also watched. Even the word "people" in that sentence is arguably out of place: there are no people in the Netflix algorithm, only relationships between shows and movies (Ditum, 2019).

This can be read as a humanist critique of a post-humanist cultural field, where mathematical connections between objects take the place of intersubjective relations mediated by natural language. The article exemplifies the difference between mathematical and natural language with the category of "race", integral to natural, intersubjective language in American society. When black Netflix viewers were served "thumbnails that highlighted black actors who were bit-players rather than the stars of a movie", they assumed the service knew their race and attempted to lure them. But that, according to the article, is not what happened. Netflix was serving them these thumbnails "not because it had any clue who the users were, only because it knew that they had previously watched shows advertised with thumbnails of black actors" (Ditum, 2019). Put differently, a user is not assumed to be black by the recommendation engine, but rather only to feature a particular data pattern (which in natural language we might describe as black).
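The article's claim - no people, only relationships between shows - describes a heuristic in the spirit of classic item-to-item collaborative filtering ("people who watched this also watched that"), which can be sketched as follows with invented viewing sessions. Note that the code never models a user; it only counts which titles co-occur:

```python
from collections import defaultdict
from itertools import permutations

watch_sessions = [            # each inner list is one anonymous account's history
    ["Show A", "Show B", "Show C"],
    ["Show A", "Show B"],
    ["Show B", "Show C"],
]

co_watched: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
for session in watch_sessions:
    for a, b in permutations(set(session), 2):
        co_watched[a][b] += 1     # "accounts that watched a also watched b"

def also_watched(title: str, n: int = 2) -> list[str]:
    """Titles most often co-viewed with `title`, ranked by raw counts."""
    neighbors = co_watched[title]
    return sorted(neighbors, key=neighbors.get, reverse=True)[:n]

print(also_watched("Show A"))   # -> ['Show B', 'Show C']
```

There is, indeed, no "you" anywhere in this structure: only a table of co-occurrences between titles.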
The divergence of mathematical language from natural language is evident in how recommendation engines "talk" about culture. Cultural forms constructed by recommendation engines do not necessarily coincide with those formed among people. A good example of this is genres. Artistic genres, such as tragedy, comedy, farce, or drama, are longstanding and, with some variations over time, still constitute a common language to discuss culture. A genre possesses its own formal structure, logic, and tropes. As aesthetic categories, genres allow communication among different actors; they make an otherwise idiosyncratic artwork a public matter. For algorithms, however, these socially and historically constructed categories are problematic as a basis for recommendations:

Netflix finds genres to be too broad to help users find new content. Why settle for "drama" when you can have "imaginative time travel movies from the 1980s" instead? Changing the way titles are categorized by becoming much more specific helps Netflix recommend quirky shows and movies that users may not find otherwise (McConnell, 2017).

Note the different conceptualization of genres assumed and promoted by recommendation engines. Rather than as discursive categories constructed in a public sphere, genres are perceived and acted upon by algorithms as behavioral categories. The more cultural taste is personalized by algorithms, and the more it is objectified, the more it is driven to fragment into micro-genres, which may be publicly meaningless. A Netflix executive explains:

We can tell you how much violence or sex it has, does it have a dark ending or a happy ending ... does it have a chimpanzee in it, does it have a corrupt cop, or does it have a corrupt cop who happens to be a chimpanzee (McConnell, 2017).

One would assume not only that such aesthetic categories are not universally discussed in culture but also that the user who supposedly made a favorable aesthetic judgment about them is not aware of them. Are they even categories of aesthetic judgment at all if the one supposedly making the judgment cannot vocalize them and advocate for them?
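A minimal sketch of this behavioral notion of genre, using invented tags and titles, composes micro-genre labels mechanically from a title's attributes rather than drawing them from any shared critical vocabulary:

```python
# Micro-genres as behavioral categories: labels are generated by combining
# fine-grained tags, not negotiated in a public sphere. All data is invented.
CATALOG = {
    "Title 1": {"era": "1980s", "mood": "imaginative", "topic": "time travel"},
    "Title 2": {"era": "1980s", "mood": "dark", "topic": "corrupt cop"},
    "Title 3": {"era": "1990s", "mood": "imaginative", "topic": "time travel"},
}

def micro_genre(tags: dict[str, str]) -> str:
    """Compose a micro-genre label by concatenating a title's tags."""
    return f"{tags['mood']} {tags['topic']} movies from the {tags['era']}"

def titles_in(label: str) -> list[str]:
    """All catalog titles whose composed label matches the micro-genre."""
    return [t for t, tags in CATALOG.items() if micro_genre(tags) == label]

print(micro_genre(CATALOG["Title 1"]))  # 'imaginative time travel movies from the 1980s'
print(titles_in("imaginative time travel movies from the 1980s"))  # ['Title 1']
```

The label exists only as an artifact of the tag combination; no viewer ever needs to utter, or even recognize, the category that organizes her recommendations.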
Arendt's conception of aesthetic judgment and culture

The discourse on aesthetic judgment and culture which underlies recommendation engines is not universal but particular; as such, it differs from what we term a modernist view of culture. Where recommendation engines assume aesthetic judgment to be objective and individualistic, the modernist view sees aesthetic judgment as demanding the active participation of subjectivity and inter-subjectivity, and upholds culture as a communal sphere of communicating agents. We take Arendt's political rendition of Kant's Critique of Judgment (2012 [1790]) as an epitome of the modernist view. By engaging Arendt's view, we seek, first, to show that there is more than one conception of aesthetic judgment, and that the worldview underlying recommendation engines is therefore particular rather than universal, and second, to identify the differences between these two views of aesthetic judgment in terms of their political ramifications. Following Kant, Arendt asserts that aesthetic judgment is both social, since it relates to sensus communis, a "community sense", and subjective, since there are no objective standards upon which it is based, that is, it does not refer to truth. This community sense is what makes humans capable of broadening their minds and thinking from the perspective of others (Degryse, 2011). Only when we are capable of thinking from other persons' standpoint are we able to communicate. We need a community in which our judgments and opinions can be vocalized and tested. Arendt therefore politicized aesthetic judgment: making aesthetic judgments and testing them in public condition and train our capacity for political judgment.

The validity of aesthetic judgment is anchored, according to Arendt, not in being objectively truthful but in communicating subjective opinions, positioning one's judgment in relation to other people and to their common sense. Common sense "fits us into a community" (Arendt, 1992, p. 70); as it "makes us capable of thinking from the perspective of others ... it also enables us to speak to each other" (Degryse, 2011). Community sense is "what judgment appeals to in everyone, and it is this possible appeal that gives judgments their special validity" (Arendt, 1992, p. 72). The validity of aesthetic judgment is anchored not in positivist truths but in the communication of subjective opinions in relation to other people and to their common sense. Aesthetic judgment, then, is subjective and inter-subjective, and the performance of both depends on their communicability. Appeal to community sense requires us to think, even subjectively, with a communal and communicative horizon in mind. It demands that we think as if we were a person among persons: "Only if we are capable of thinking from the other person's standpoint are we able to communicate. Without this capacity, we would not be capable of speaking in such a way that another person would understand us" (Arendt, 1992, p. 74). Through speech and communication, common sense makes politics possible; moreover, it forms humans as political beings. Humans are connected to each other not only based on their needs and wants, as contract theories would have it. More important is our mental interdependence. Humans "are dependent on their fellow men not only because of their having a body and physical needs but precisely for their mental faculties" (ibid., p. 14); they are mentally interdependent and are bound to each other by a common world. Aesthetic judgment is, therefore, neither completely individual nor wholly social; rather, it is somehow both:

One judges always as a member of a community, guided by one's community sense, one's sensus communis. But in the last analysis, one is a member of a world community by the sheer fact of being human; this is one's 'cosmopolitan existence'. When one judges and when one acts in political matters, one is supposed to take one's bearings from the idea, not the actuality, of being a world citizen and, therefore, also a Weltbetrachter, a world spectator (Arendt, 1992, pp. 75-76).

When we judge, we always judge as members of a community, guided by what we all have in common. Thinking subjectively with the sensus communis in mind, however, is not a substitute for taking into account the actual judgments of others, for having an actual dialogue with others (Arendt, 1992, pp. 43, 71). As Arendt puts it: "Unless you can somehow communicate and expose to the test of others ... whatever you may have found out when you were alone, this faculty [i.e., enlarged thinking] exerted in solitude will disappear" (Arendt, 1992, p. 40). Communicating aesthetic judgment is, hence, part and parcel of making aesthetic judgment. "The very faculty of thinking", says Arendt, "depends on its public use; without 'the test of free and open examination', no thinking and no opinion-formation are possible. Reason is not made 'to isolate itself but to get into community with others'" (Arendt, 1992, p. 40). Degryse succinctly summarizes these two aspects of aesthetic judgment as a communal action:

Our mental faculties call for others. Taking into account the possible judgments of others allows us to form judgments.
But it does not stop here. We have to discuss our judgments and opinions with others in order to keep our mental faculties intact (Degryse, 2011).

Conclusion

Recommendation engines do not simply automate aesthetic judgment - as if leaving its essence intact - but rather change the action they set out to automate. This change is of both cultural and political significance. Culturally, recommendation engines presume aesthetic judgment to be objective and individual, thus undermining the subjective and inter-subjective character of culture. Our findings support existing research regarding the privatization and individualization of culture. Striphas warns that "what is at stake in algorithmic culture is the gradual abandonment of culture's publicness" (Striphas, 2015). In this ever-expanding private bubble, "recommender algorithms ... can act as 'intimate experts', accompanying users in their self-care practices", promoting, in turn, "creative self-transformation" (Karakayali et al., 2018). Just and Latzer, who see recommendation engines as a new source of reality construction, observe that "compared to reality construction by traditional mass media, algorithmic reality construction tends to increase individualization" (Just & Latzer, 2016). And Prey underscores the performativity of recommendation engines, insisting that how they see the individual "work[s] to enact the individual on these platforms", resulting in "algorithmic individuation" in the field of cultural consumption (Prey, 2018). But we propose, following Arendt, that this change is also of political significance. According to the modernist view of culture, aesthetic judgment entails action - subjective and intersubjective. Recommendation engines relegate aesthetic judgment to the machine, disposing of the agential act of judging. Arendt sees not merely an analytical homology between aesthetic judgment and political judgment, but an empirical link as well: judging aesthetically in the cultural field also serves as training for political judgment. Recommendation engines render aesthetic judgments into choices, leaving out reflections upon these choices. Behavior is taken as a proxy for reflection, and choice replaces a deliberative process of argumentation and persuasion. As Arendt insists, what matters in aesthetic judgment is not so much what we choose, but that we are involved in the process of choosing as world spectators. That is what aesthetic judgment shares with political judgment. By excluding this facet from the making of aesthetic judgment, recommendation engines focus on the end-product - the recommendation itself - the validity of which is assumed to appeal to notions of truth. We have used Arendt's conception of aesthetic judgment as an epitome of a modernist ideal not in order to consecrate it against the algorithmic model, but rather for methodological and critical purposes. Methodologically, this comparison compels us to understand recommendation engines as cultural intermediaries, rather than merely technical devices. Critically, this comparison, while not suggesting that one model of aesthetic judgment is more valid than the other, does help us to highlight the distinct political ramifications of each model. While the modernist model renders culture a communal and communicative human endeavor, thus expanding its political horizons, the algorithmic model contracts these horizons.

References

Anderson, C. W. (2013). Towards a sociology of computational and algorithmic journalism. New Media & Society, 15(7), 1005-1021.
Arendt, H. (1992). Lectures on Kant's Political Philosophy. Chicago: University of Chicago Press.
Bail, C. A. (2014). The cultural environment: Measuring culture with big data. Theory and Society, 43(3), 465-482.
Beeber, A. (2019, February 2). Netflix recommendations a bit unnerving. The Lethbridge Herald.
Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1-13.
Benjamin, W. (1969). Illuminations: Essays and reflections. New York: Schocken Books.
Carah, N., & Angus, D. (2018). Algorithmic brand culture: Participatory labour, machine learning and branding on social media. Media, Culture & Society, 40(2), 178-194.
Chhabra, S. (2017, August 22). Netflix says 80 percent of watched content is based on algorithmic recommendations. MobileSyrup.
Clarke, A. (2019, April 2). How Netflix decides what you want to watch. The Sydney Morning Herald.
Crawford, K. (2016). Can an algorithm be agonistic? Ten scenes about living in calculated publics. Science, Technology & Human Values, 41(1), 77-92.
Degryse, A. (2011). Sensus communis as a foundation for men as political beings: Arendt's reading of Kant's Critique of Judgment. Philosophy & Social Criticism, 37(3), 345-358.
van Dijck, J. (2014). Datafication, dataism and dataveillance: Big data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197-208.
DiResta, R. (2019, March 5). How Amazon's algorithms curated a dystopian bookstore. Wired.
Ditum, S. (2019, April 12). Netflix doesn't know who you really are. Australian Financial Review.
Economist. (2014). How far can Amazon go? The Economist.
Economist. (2019, April 11). Amazon's empire rests on its low-key approach to AI. The Economist.
Ferguson, A. (2017). The rise of big data policing: Surveillance, race, and the future of law enforcement. New York, NY: New York University Press.
Fuchs, C. (2011). An alternative view of privacy on Facebook. Information, 2(1), 140-165.
Gillespie, T. (2012a). Can an algorithm be wrong? Limn, 2, 21-24.
Gillespie, T. (2012b). The relevance of algorithms. Culture Digitally.
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167-194). Cambridge, MA: MIT Press.
Gillespie, T. (2016). Trending is trending: When algorithms become culture. In R. Seyfert & J. Roberge (Eds.), Algorithmic cultures: Essays on meaning, performance and new technologies (pp. 52-75). London: Routledge.
Grosser, B. (2017). Tracing you: How transparent surveillance reveals a desire for visibility. Big Data & Society, 4(1), 1-6.
Hallinan, B., & Striphas, T. (2014). Recommended for you: The Netflix Prize and the production of algorithmic culture. New Media & Society, 18(1), 117-137.
Hayek, F. A. (1942). Scientism and the study of society. Economica, 9(35), 267-291.
Hildebrandt, M. (2019). Privacy as protection of the incomputable self: From agnostic to agonistic machine learning. Theoretical Inquiries in Law, 20(1), 83-121.
Just, N., & Latzer, M. (2016). Governance by algorithms: Reality construction by algorithmic selection on the internet. Media, Culture & Society, 39(2), 238-258.
Karakayali, N., Kostem, B., & Galip, I. (2018). Recommendation systems as technologies of the self: Algorithmic control and the formation of music taste. Theory, Culture & Society, 35(2), 3-24.
Kennedy, H., & Moss, G. (2015). Known or knowing publics? Social media data mining and the question of public agency. Big Data & Society, 2(2), 205395171561114.
Lee, B. (2015, July 10). What are your weirdest Netflix recommendations? The Guardian.
Mauk, B. (2013). The work of art in the age of Amazon. The New Yorker.
Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we live, work, and think. New York: Houghton Mifflin Harcourt.
McCarthy, T. (2017, May 26). Amazon's first New York bookstore blends tradition with technology. The Guardian.
McConnell, J. (2017, September 8). Tracking TV habits; Netflix is refining recommendations for your next binge-worthy show. Financial Post.
Morris, J. W. (2015). Curation by code: Infomediaries and the data mining of taste. European Journal of Cultural Studies, 18(4-5), 446-463.
Moser, K. (2006, February 16). How they know what you like before you do. Christian Science Monitor.
Noble, S. U. (2019). Algorithms of oppression: How search engines reinforce racism. New York: NYU Press.
O'Shea, L. (2018, April 16). What kind of a person does Netflix favourites think I am? The Guardian.
Pariser, E. (2012). The filter bubble: How the new personalized web is changing what we read and how we think. New York: Penguin Books.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge: Harvard University Press.
Prey, R. (2018). Nothing personal: Algorithmic individuation on music streaming platforms. Media, Culture & Society, 40(7), 1086-1100.
Satola, A. (2018, September 17). A modern reading of humanism vs algorithm. Michigan Daily.
Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4-5), 395-412.
Thomas, S. L., Nafus, D., & Sherman, J. (2018). Algorithms as fetish: Faith and possibility in algorithmic work. Big Data & Society, January 2018. doi:10.1177/2053951717751552
Tufekci, Z. (2019, April 22). How recommendation algorithms run the world. Wired.
Turow, J. (2011). The daily you: How the new advertising industry is defining your identity and your worth. New Haven: Yale University Press.
Weber, M. (1978). Economy and society. Berkeley: University of California Press.
Weissmann, J. (2013, April 1). The simple reason why Goodreads is so valuable to Amazon. The Atlantic.
Yek, J. (2017). 5 lessons you can learn from Amazon's recommendation engine. Disrupt by Altitude Labs. Retrieved from http://altitudelabs.com/blog/amazon-product-recommendation-engine/