Political Communication, 37:256-280, 2020
Copyright © 2019 Taylor & Francis Group, LLC
ISSN: 1058-4609 print / 1091-7675 online
DOI: https://doi.org/10.1080/10584609.2019.1661888

Political Astroturfing on Twitter: How to Coordinate a Disinformation Campaign

FRANZISKA B. KELLER, DAVID SCHOCH, SEBASTIAN STIER, and JUNGHWAN YANG

Political astroturfing, a centrally coordinated disinformation campaign in which participants pretend to be ordinary citizens acting independently, has the potential to influence electoral outcomes and other forms of political behavior. Yet, it is hard to evaluate the scope and effectiveness of political astroturfing without "ground truth" information, such as the verified identity of its agents and instigators. In this paper, we study the South Korean National Intelligence Service's (NIS) disinformation campaign during the presidential election in 2012, taking advantage of a list of participating accounts published in court proceedings. The features that best distinguish these accounts from regular users in contemporaneously collected Twitter data are traces left by coordination among astroturfing agents, rather than the individual account characteristics typically used in related approaches such as social bot detection. We develop a methodology that exploits these distinct empirical patterns to identify additional likely astroturfing accounts and validate this detection strategy by analyzing their messages and current account status. However, an analysis relying on Twitter influence metrics shows that the known and suspect NIS accounts had only a limited impact on political social media discussions. By using the principal-agent framework to analyze one of the earliest revealed instances of political astroturfing, we improve on extant methodological approaches to detect disinformation campaigns and ground them more firmly in social science theory.

Keywords disinformation, astroturfing, election campaign, propaganda, social media

Franziska B. Keller is an Assistant Professor at the Division of Social Science of Hong Kong University of Science and Technology. She uses social network analysis to explain the inner workings of non-democratic regimes. David Schoch is a Presidential Fellow in the Department of Sociology at the University of Manchester. His work focuses on social network analysis and social media.
Sebastian Stier is a Senior Researcher in the Department of Computational Social Science at GESIS - Leibniz Institute for the Social Sciences in Cologne. His main research interests include political communication, comparative politics, and populism. JungHwan Yang is an Assistant Professor in the Department of Communication at the University of Illinois at Urbana-Champaign. His research focuses on political communication, audience behavior, and media effects.

Address correspondence to Sebastian Stier, GESIS - Leibniz Institute for the Social Sciences, Department of Computational Social Science, Unter Sachsenhausen 6-8, 50667 Cologne, Germany. E-mail: sebastian.stier@gesis.org

Authors are listed alphabetically and contributed equally to this work. Color versions of one or more of the figures in the article can be found online at www.tandfonline.com/UPCP.

Introduction

A few lone voices have long considered disinformation on social media a normative and practical problem for electoral democracies, but the broader public only awoke to this danger after the Russian interference in the 2016 U.S. presidential election (Director of National Intelligence, 2017), a campaign that relied, among other things, on thousands of Twitter, Facebook, and Reddit accounts pretending to be ordinary U.S. citizens. This was an instance of political astroturfing - a campaign in which participants appear to be part of a genuine grassroots movement or sentiment, while it is in fact orchestrated centrally and top down (Howard, 2006; Walker, 2014). We argue that this is a common strategy of disinformation, where the disinformation pertains less to the content of the campaign - which can be completely truthful - than to the false impression of independent popular support. Our paper makes three contributions to scholarship on political disinformation. First, we present an in-depth characterization of the earliest known instance of a coordinated online disinformation campaign for which offline information identifies the accounts involved: the 2012 South Korean presidential election, in which the National Intelligence Service (NIS) used astroturfing to support the conservative candidate Geun-hye Park. Taking advantage of court proceedings that reveal the Twitter accounts involved and the testimonials from the NIS agents who participated in the disinformation campaign, we are able to investigate how this campaign was organized. Our analysis reveals detectable coordination and activity patterns in the online behavior of 1,008 Twitter accounts controlled by NIS agents. Second, we present a theoretically informed method for the detection of a disinformation campaign. We argue that past research's predominant focus on automated accounts, famously known as "social bots" (see Howard & Kollanyi, 2016; Varol, Ferrara, Davis, Menczer, & Flammini, 2017), misses its target, since reports on recent astroturfing campaigns suggest that they are often at least partially run by actual humans (so-called "cyborgs": Chu, Gianvecchio, Wang, & Jajodia, 2012), which may shield the accounts from detection strategies focused on automated behavior (Grimme, Assenmacher, & Adam, 2018). Using bot detection to study astroturfing betrays a fundamental conceptual mismatch: bots are one tool that can be used in an astroturfing campaign, but not all bots are part of it, and conversely, not all astroturfing accounts are bots.
We propose an identification strategy based on coordination patterns, arguing that similar behavior among a group of managed accounts is a stronger signal of a disinformation campaign than "bot-like" individual behavior. These patterns are impossible to hide entirely, because information campaigns are by definition exercises in sending coordinated messages. Our detection method allows us to identify an additional 921 suspect accounts likely to be involved in the NIS astroturfing campaign. Further qualitative examinations of the suspected accounts bolster the validity of our detection method. Our methodological approach should be transferable to other cases because principal-agent problems reduce any coordinated disinformation campaign's ability to mask such patterns. Third, we contribute to the debate on disinformation by measuring the impact of such a campaign on Twitter (Vosoughi, Roy, & Aral, 2018). Disinformation might be especially detrimental for democratic processes given the increasingly central role of social media in political communication (Bode, Hanna, Yang, & Shah, 2015; Jungherr, 2016; Stier, Bleier, Lietz, & Strohmaier, 2018; Thorson & Wells, 2016; Vaccari, 2017). We argue that the NIS campaign was, in many ways, a best-case scenario for an astroturfing campaign aimed at swaying public opinion: the campaign had considerable resources and professional manpower at its disposal and targeted an unsuspecting and politically polarized audience. We assess the overall impact of the astroturfing campaign based on different measurements of user influence instead of relying on specific anecdotes. Our results show that even when aggregating the activities of known and suspect NIS accounts, commonly used metrics of online influence reveal a limited impact. We discuss these various findings against the backdrop of recent conceptual, theoretical, and methodological debates on disinformation.

Astroturfing on Social Media

Disinformation and the Problem of Attribution

Social media platforms such as Twitter and Facebook have become major venues for ordinary citizens to discuss politics, disseminate news, and organize collective action (Bennett & Segerberg, 2013; Lilleker & Koc-Michalska, 2017; Stier et al., 2018; Thorson & Wells, 2016; Vaccari, 2017). Social media also have the potential to set the news agenda (Chadwick, 2013; Neuman, Guggenheim, Jang, & Bae, 2014), as journalists pay close attention to social media buzz (Lukito et al., 2018). The increasing roles of opinion leaders and ordinary citizens in the public opinion formation process can enable new actors to frame public discourses in their favor (Bode et al., 2015). But as these interactive processes at the grassroots level have become more influential, political actors such as governments (King, Pan, & Roberts, 2017; Lukito et al., 2018), hyperpartisan media (Allcott & Gentzkow, 2017; Faris et al., 2017), far-right groups (Marwick & Lewis, 2017), or actors with economic motives (Silverman & Alexander, 2016) try to influence the public with false, misleading, or exaggerated information - often called "fake news". But academics are increasingly wary of this umbrella term, because conceptual ambiguity and misuse by political actors have impeded research on the various phenomena commonly described as fake news.
Hence, there is a growing consensus in the literature to distinguish multiple "information disorders": disinformation - false information spread with intent to deceive; misinformation - incorrect information spread without intention to harm; and malinformation - the strategic dissemination of true facts with a negative intent, such as the leaking of the emails of John Podesta, chairman of Hillary Clinton's 2016 presidential campaign (Wardle & Derakhshan, 2017, p. 20). In this paper, we study political astroturfing. Whereas a "social movement grows when people with grievances meet, agree on a common agenda, and organize for political action" (Howard, 2006, p. 86; see also Bennett & Segerberg, 2013), an astroturfing campaign tries to appear like such an organic expression of public opinion, but is actually centrally coordinated and organized - hence the analogy to the eponymous carpet "AstroTurf", which tries to mimic real grass (Howard, 2006; Walker, 2014). Even before the advent of social networking sites, digital technologies had been used by lobbyists and consultants for sophisticated astroturfing campaigns coupled with micro-targeting to advance political causes (Howard, 2006). On social media, astroturfing takes the shape of a centrally organized campaign in which accounts are masked as ordinary users who post opinions favorable to the instigators of such a campaign, distract others from negative news, draw attention to divisive political issues, or attack opponents and critics. The content of astroturfing may or may not be factually correct (Jackson, 2017); however, such a campaign clearly deceives the audience about the identity of its participants: they are paid or otherwise incentivized actors, not ordinary citizens. As such, astroturfing should be classified as a type of disinformation. Wardle and Derakhshan (2017) propose to study agents, messages (contents), and interpreters (i.e., the reception) of an information disorder. But without evidence that a specific account is intent on causing political harm, we cannot justify classifying its messages as disinformation. Facing this fundamental problem of attribution, characterizations of disinformation and its reception by audiences are necessarily imprecise. Because of the difficulties in identifying disinformation and the intention of a message, most research on this topic has therefore focused on the more easily identifiable automated accounts or social bots. Yet these results rest on the assumption that a particular machine learning algorithm in combination with human coding as validation reliably identifies bots (Stukal, Sanovich, Bonneau, & Tucker, 2017; Varol et al., 2017) or that automated accounts exceed an arbitrary activity threshold (Howard & Kollanyi, 2016). More importantly, this approach cannot independently verify whether flagged accounts are indeed part of the concerted effort to influence public opinion in question, and it likely misses many human-manned accounts or sophisticated bots involved in such campaigns. A few recent papers have taken a more promising approach: enriching social media datasets with authentic external data on the identity and intentions of agents of online disinformation. King et al. (2017) have used leaked emails of an official responsible for government propaganda to characterize the activities and estimate the reach of the Chinese "50 cent army".
In the U.S., researchers have relied on lists of Russian troll accounts to study their intervention in the 2016 presidential election (Badawy, Ferrara, & Lerman, 2018; Linvill & Warren, 2018; Lukito et al., 2018; Zannettou et al., 2018). While providing valuable data-driven insights, these studies still largely focus on anecdotes and lack a theory-driven framework for understanding the human agency behind coordinated online disinformation.

Astroturfing as Centrally Organized Message Coordination

Coordinated messaging is inherent to any information campaign, which by definition consists of a group of people who want to convey specific information to an audience. This is also true for genuine grassroots movements, which we define here as movements initiated by one or several regular users that expand organically by convincing other users of the movement's merit. Their participants also send out similar messages, but they are intrinsically motivated and usually organized in a more decentralized fashion than agents of an astroturfing campaign (Bennett & Segerberg, 2013; Walker, 2014). The empirical distinction between the two is not straightforward, because in real life, there may be institutionalized actors helping genuine grassroots movements organize (Walker, 2014), and astroturfing campaigns may be supported or joined by genuine believers (Howard, 2006, p. 99). But because grassroots campaigns at least initially lack a central organizer, their messages should be more variegated and spread over a more extended time window in a cascading fashion. By contrast, astroturfing campaigns are initiated by a principal directly instructing a group of users who respond to extrinsic rewards - the agents. This may result in suspicious patterns, e.g., if all participants react to instructions by posting the exact same message at the same time. Such an unsubtle astroturfing campaign of course has a high likelihood of being discovered. By definition, principals want their campaign to stay hidden, so their best solution is to camouflage it as a genuine grassroots movement. To better understand how the patterns of the two types of campaigns should differ, we turn to principal-agent theory (Miller, 2005; Ross, 1973). This theory is often applied in economics and business management to conceptualize the problems observed when there is an information asymmetry between the principal, a project owner, and the agent, who undertakes a task on the principal's behalf. Specifically, the principal does not know how well the agent executes the task unless the latter is constantly monitored. The theory suggests that this asymmetry becomes problematic because the goals of the principal and the agents are often not aligned. The goal of the following paragraphs is to derive theoretical expectations about the offline and online behavior patterns we should observe, but we also provide evidence from real-world astroturfing campaigns to show that our assumptions and hypotheses are grounded in empirical reality. The principals of astroturfing can rely on either automated or human agents to spread their message. Social bots should be cheaper and easier to scale up, while human-manned accounts should - at least in theory - be better at convincing the audience and passing as regular users.1 Nevertheless, being part of an information campaign, both will propagate uniform or at least similar messages over a specific time span.
Unless the principal plans the campaign months or years in advance, the accounts will have to be simultaneously created or reassigned at the beginning of a campaign. Grimme et al. (2018), for instance, find that most of the accounts involved in a troll attack during a German election TV debate were less than one month old. Being part of a campaign, the accounts will also simultaneously start and stop tweeting about similar topics, because participants of the campaign receive central instructions about the content and timing of their tweets. Cao, Yang, Yu, and Palow (2014) show that malicious Facebook and Instagram accounts upload spam photos and start following the same accounts at the same time. The astroturfing principal can try to camouflage the centralized appearance of the campaign through a variety of measures, which, however, incur at least opportunity costs: staggering messages temporally means fewer messages issued in a given time period, coming up with different phrasings requires resources or time, sharing unrelated messages risks drowning out the real message, etc. On Twitter, "message coordination" can be implemented in at least three ways. First, the accounts involved can retweet each other's messages. This helps increase the overall reach when individual accounts have different sets of followers. Second, the multiple accounts can jointly amplify messages that are congruent with the campaign goal. By co-retweeting the exact same third-party message, astroturfing campaigns can boost the number of messages that fit their campaign goals. Third, different accounts managed by a single person (or a team) can tweet the same message seemingly independently - a pattern we will call co-tweeting. This type of coordination is probably most telling of an astroturfing campaign, because regular users are very unlikely to post the same message at the same time. From the principal's perspective, the costs of astroturfing arise from the primarily extrinsic motivation of the agents. Interested mainly in the external rewards, agents engage in shirking and satisficing.2 Asking them to camouflage their activity adds another layer of complexity to the classical principal-agent problem. For instance, in order to save costs, agents control dozens, if not hundreds, of accounts while trying to garner enough followers for these accounts. Unless the agent puts considerable energy into developing each of these "sock puppets" into fully fledged virtual personas, those accounts will look and behave in a very similar manner. The principal can easily count the number of personas created and the messages they post, but assessing their originality requires more time and effort. Agents may therefore engage in satisficing: creating just enough low-quality accounts and messages to meet their principal's requirements instead of properly camouflaging their activity. In order to prevent this type of shirking, the principal may opt for constant and direct supervision by physically locating all agents in the same office - as done by the "Russian troll factories"3 - or at least hire them to work during office hours instead of paying per post, as the eponymous "50 cent party" allegedly is (King et al., 2017). But direct supervision also leads to specific patterns. For instance, once the agents leave work in the evening, they are unlikely to continue tweeting, unlike ordinary users.
Given these costs, principals themselves may end up satisficing, and only spend enough resources to avoid detection while it matters, e.g., until the election takes place. Retroactively, we may therefore be able to detect participating accounts by their sudden inactivity, or because they collectively switch to discussing the topic of the next astroturfing campaign they have been assigned to. Automation via social bots may appear to solve many of these problems - robots do not earn salaries or engage in shirking, after all. But the fact that many bots are still identifiable by open source programs (Varol et al., 2017) and that the Russian and Chinese governments decided to employ an army of actual human beings for their astroturfing campaigns indicates that the perfect astroturfing bots either do not exist yet or are difficult to program. And even if an individual bot perfectly mimics a genuine human supporter, the army of such bots may still exhibit suspiciously coordinated behavior, because they are ultimately centrally controlled by a human being who is part of a disinformation campaign. Because of the arguments laid out above, our methodology focuses on the behavioral patterns caused by message coordination and principal-agent problems to detect astroturfing. This relational approach requires examining accounts in comparison and in interaction with each other instead of looking at them individually.

Background

The 2012 South Korean Presidential Election Campaign and the National Intelligence Service

In this study, we investigate the case of the 2012 presidential election campaign in South Korea. According to state prosecutors and journalists, agents of the NIS posted hundreds of thousands of Twitter messages in order to influence public opinion in favor of electing Geun-hye Park, the presidential candidate of the ruling party, Saenuridang, and the daughter of strongman Chung-hee Park, who was in power in the 1960s and 1970s.4 The first news report on this astroturfing campaign broke on December 11, 2012, less than 10 days before the election, when a congressman from the opposition party Minjootonghapdang found an NIS agent in an off-site NIS office used for the astroturfing campaign. When the congressman called the police, the NIS agent, Ha-Young Kim, locked herself in for more than 40 hours, during which she deleted evidence from her computers. Although she erased more than 180 files, she was not able to erase everything. The prosecutors later found a list of NIS Twitter accounts, the passwords of the accounts, the instructions from other agents, and the names and initials of the agents responsible for each one of them in a series of emails exchanged between other NIS agents, retrieved from the two computers she submitted as evidence in order to be released from police surveillance (SeoulDistrictCourt, 2014, p. 49). Some of this information was allowed as evidence in court and therefore published in the court proceedings. Unsurprisingly, this story was quickly covered in Korean news outlets, and the South Korean news organization Newstapa even published a detailed report - but without using any ground truth data (Newstapa, 2013). In this study, we use the above-mentioned list published in the court proceedings and treat it as a ground truth that is independent of the online behavior we examine in the rest of the paper.
The court proceedings include 1,008 account names of 716 unique Twitter accounts, all of which the prosecution found in the emails of NIS agents, or which were linked together through TweetDeck.5 During the trials, the prosecution tried to convince the judge to allow as evidence tweets from an additional 400 accounts they strongly suspected of being part of the campaign. But amid general stonewalling by the defense, the judge refused their request. We therefore have good reasons to assume that the list of 1,008 user IDs is partial, and that there are other NIS accounts active in our dataset.

Data

We retrieved Twitter data from a data archive managed by the Social Media and Democracy (SMAD) research group at the University of Wisconsin-Madison. The SMAD archive contains a 10% stream of global Twitter content accessed through Twitter's Gardenhose Streaming API.6 The data was collected in real time between June 1 and December 31, 2012. These seven months contain major steps in the political process, such as the election of Park as Saenuridang's candidate (August 8, 2012) and Minjootonghapdang's primaries (August 25, 2012), as well as other important political events, including the arrest of the NIS agent (SeoulHigherCourt, 2015, p. 170). We use the 75 million tweets from accounts with a Korean language setting (lang = "ko") found for this period in the archive. 702 of the 1,008 known NIS accounts are active in this dataset, posting about 195,000 tweets. Some of the user_ids in the court proceedings do not appear in our dataset, potentially because some accounts were not active during that seven-month period, did not post enough tweets to get picked up in the 10% sample, or had a non-Korean language setup in their accounts. Additional descriptive statistics can be found in Table A7 in the Supplementary Material.

Characterizing Astroturfing

Temporal Coordination Patterns

We first explore the timing of tweets issued by the known NIS accounts. Figure 1 reveals several clear patterns: NIS accounts tweet most commonly during regular office hours, while tweets by other users are posted most frequently in the after-work hours. NIS tweets are significantly less common during the weekend, while regular users post more frequently on Saturdays and Sundays. Also notable is the sudden drop in NIS tweets after their campaign was discovered on December 11, exactly when election day was approaching and regular users became more active.

Figure 1. Distribution of tweets sent by NIS accounts and regular accounts each hour of the day (a), on each weekday (b), and on each day during the research period (c).

This simple analysis thus reveals the first of many patterns induced by the campaign's central organization and attempts to solve principal-agent problems. The agents reacted to central commands to begin and shut down their activity, and were apparently hired to work on a regular schedule. This conclusion is supported by the court documents, which describe daily meetings during which the agents received their tasks before heading out to work in internet cafes to hide their IP addresses (SeoulHigherCourt, 2015).
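To illustrate how such temporal patterns can be computed from raw tweet data, the following minimal sketch (Python with pandas) derives hourly and weekday activity shares for flagged versus regular accounts. The column names (user_id, created_at) and the nis_ids input are illustrative assumptions, not the authors' actual pipeline.

```python
import pandas as pd

def activity_profiles(tweets: pd.DataFrame, nis_ids: set) -> dict:
    """Hourly and weekday tweet shares for flagged vs. regular accounts.

    `tweets` is assumed to have columns "user_id" and "created_at";
    `nis_ids` is the set of account IDs taken from the court proceedings.
    """
    df = tweets.copy()
    df["created_at"] = pd.to_datetime(df["created_at"])
    df["group"] = df["user_id"].isin(nis_ids).map({True: "NIS", False: "regular"})
    df["hour"] = df["created_at"].dt.hour
    df["weekday"] = df["created_at"].dt.day_name()

    # Share of each group's tweets falling into each hour / weekday,
    # which makes office-hour and weekend gaps directly comparable.
    by_hour = df.groupby("group")["hour"].value_counts(normalize=True).unstack(fill_value=0)
    by_weekday = df.groupby("group")["weekday"].value_counts(normalize=True).unstack(fill_value=0)
    return {"hour": by_hour, "weekday": by_weekday}
```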
Figure A1 in the Supplementary Material shows that the individual accounts involved display similar activity patterns among each other.7 Almost all of the 702 accounts appear to have at least one, if not several, accounts with near-identical activity patterns, and these accounts often turn out to be manned by the same agent (see below). A manual inspection also reveals a group of accounts that posts primarily newspaper headlines and appears to be at least semi-automated, judging from the number of tweets they post each day.

Message Coordination Patterns

So far we have only looked at the timing, but not at the contents of tweets. In this section, we examine whether NIS accounts also tweeted similar contents.

Retweet Networks. Retweeting is the simplest way in which an astroturfing account can amplify the campaign's message - it requires only one click. It is therefore not surprising that half (48%) of the NIS tweets are retweets - in many cases of a fellow NIS account. Figure 2 shows the retweet network among the known NIS accounts. The size of a node is proportional to the number of retweets the account receives from other NIS accounts, the saturation indicates what percentage of the retweets sent by that account are retweets of other NIS accounts, and the numbers add information from the court proceedings on the different agents in charge of the accounts. The filtered retweet network (b) clarifies which groups of accounts are most likely to retweet each other.

Figure 2. Retweet network among NIS accounts. Node ID corresponds to agent in charge (SeoulDistrictCourt, 2014). (a) shows the complete network and (b) a filtered version where an edge is present if the retweet count is greater than two.

Several things are notable: first, NIS agents did not consistently use the strategy of retweeting each other to increase their retweet count: while some accounts receive a lot of retweets in Figure 2(a), many others do not. Maybe a consistent amplification strategy was deemed too obvious. Second, many agents (such as 7, 12, 15, 16, and 19) created one or two main accounts, which their other accounts then repeatedly retweeted. Other agents appear to cooperate with each other by retweeting their colleagues: agent 10 retweets one particular account of agent 4, for instance, and agents 14 and 20 one account of agent 2. The density of the network in Figure 2(b) hints not just at how frequently the NIS accounts retweeted each other, but also at a potentially limited impact: many of the NIS tweets are spread within the campaign's network rather than by ordinary accounts. Finally, searching for accounts that predominantly retweet other NIS accounts may be a valid detection strategy: the color of most accounts is a dark gray, meaning that almost all of their retweets come from known NIS accounts.

Co-tweet Networks. Instead of retweeting each other, astroturfing accounts can also simply post the same message. This strategy makes it harder for the casual observer to notice that an account did not come up with the particular message on its own. Unlike with retweeting, the observer also cannot easily see who the originator of the message is and who else may have (re)posted it. We call this coordination pattern "co-tweeting", and define it as sending the same tweet within one minute of each other. In this section, we exclude retweets ("co-retweets") from the analysis, which are the subject of the subsequent section.
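As a rough illustration of how co-tweets under this definition can be extracted, the sketch below groups non-retweets by their URL-stripped text and links any two distinct accounts that posted the same text within the one-minute window. The column names and the pairwise loop are assumptions made for clarity, not a reproduction of the authors' code.

```python
import re
from itertools import combinations
import pandas as pd

URL_RE = re.compile(r"https?://\S+")

def co_tweet_pairs(tweets: pd.DataFrame, window: str = "1min") -> list:
    """Return (user_a, user_b) pairs that posted identical text within `window`.

    `tweets` is assumed to have columns: user_id, text, created_at, is_retweet.
    Two tweets count as identical if their text matches after removing URLs.
    """
    df = tweets[~tweets["is_retweet"]].copy()
    df["created_at"] = pd.to_datetime(df["created_at"])
    df["norm_text"] = df["text"].str.replace(URL_RE, "", regex=True).str.strip()

    pairs = []
    window = pd.Timedelta(window)
    for _, grp in df.groupby("norm_text"):
        if grp["user_id"].nunique() < 2:
            continue
        rows = list(grp.sort_values("created_at").itertuples())
        for a, b in combinations(rows, 2):
            # rows are time-ordered, so b.created_at >= a.created_at
            if a.user_id != b.user_id and (b.created_at - a.created_at) <= window:
                pairs.append(tuple(sorted((a.user_id, b.user_id))))
    return pairs
```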
Co-tweeting is also a very common phenomenon in our dataset: it contains 100,000 original NIS tweets (i.e., non-retweets), of which only a bit more than half are unique.8 About 45,000 tweets are posted multiple times by different NIS accounts. Given that our data is only a 10% sample of all tweets posted, the true number may be even higher. In more than half of the instances, those identical tweets are posted in the exact same second. Our suspicion that agents used applications like TweetDeck to control their multiple accounts is confirmed in the court documents. But other agents apparently copied and pasted the message manually into the accounts under their control, and were reasonably efficient in doing this: more than 85% of all co-tweets appear within one minute of each other, and almost 99% within one work day, or 10 hours. We tested varying time thresholds for the co-tweeting and obtained robust results (see section A4 in the Supplementary Material). Figure 3 shows the co-tweet network among the known NIS accounts. The heaviest co-tweeting of original content occurs among a group of (semi-)automated accounts that share links to news articles. Accounts assigned to individual agents form cohesive network clusters, in some cases even separate components. This would seem to indicate that the agents did not coordinate with each other in larger teams while posting - a finding supported by the description of them fanning out to different internet cafes (SeoulHigherCourt, 2015).

Figure 3. Co-tweet network among NIS accounts. ID indicates assignment of agents to accounts (SeoulHigherCourt, 2015).

Co-retweet Networks. The third pattern of coordinating messages across accounts is that of contemporaneous retweeting of the same content. "Co-retweeting" - retweeting within a short time period the same message from an account which may or may not be part of the campaign - is again extremely common in our dataset: only 17% of the retweets are unique. In contrast to co-tweeting, however, co-retweeting is more spread out. Only 20% appear within one minute and 99% within one week. In the case of co-retweeting - unlike with co-tweeting - there is a reasonable probability that two accounts that retweet the same third account are not part of a campaign, but simply two regular users who happen to follow (and retweet) the same account. But repeatedly retweeting the same accounts within the same minute does seem like an unusual pattern worth exploring. We therefore chose the same tight time window as with the co-tweets. The resulting network is shown in Figure 4. A striking 725 accounts regularly send the same retweet as another account. Unlike co-tweeting, co-retweeting thus seems to be a strategy employed by almost every NIS agent and account, even the heavily automated ones. The network itself is composed of many densely connected components, which again reflects the division by agent. The three measures we created have shown that the South Korean secret service's astroturfing campaign left a quite visible pattern: the accounts involved frequently posted the exact same content within a very short time span. Tables A4-A6 in the Supplementary Material present network statistics for the three networks resulting from message coordination. The network density in all cases differs significantly from the network density among random groups of users.
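Co-occurrence pairs like those above can be aggregated into a network in the spirit of Figures 3 and 4. A minimal sketch, assuming a list of co-tweet or co-retweet pairs, a set of known NIS accounts, and the networkx library; dropping ties observed only once mirrors the noise-reduction step described for the co-tweet network.

```python
from collections import Counter
import networkx as nx

def coordination_network(pairs, known_nis: set, min_weight: int = 2):
    """Build a co-(re)tweet network and find components touching known NIS accounts.

    `pairs` is an iterable of (user_a, user_b) co-occurrence tuples (assumed input);
    `known_nis` is the set of account IDs from the court proceedings.
    """
    weights = Counter(pairs)
    G = nx.Graph()
    for (a, b), w in weights.items():
        if w >= min_weight:          # drop ties observed only once to reduce noise
            G.add_edge(a, b, weight=w)

    suspect_components = [
        comp for comp in nx.connected_components(G)
        if comp & known_nis          # component contains at least one known NIS account
    ]
    suspects = set().union(*suspect_components) - known_nis if suspect_components else set()
    return G, suspects
```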
The court proceedings reveal why this is so: the agents had to submit a regular summary of their work to the chief officer, containing information such as the number of tweets sent, the number of followers gained during a specific period, and the number of right-wing opinion leaders' tweets they retweeted. The agents also reported the names and passwords of the accounts they managed so that the chief officer could monitor their activities more directly (SeoulHigherCourt, 2015, pp. 76, 81). However, their supervisors do not seem to have provided incentives for creating believable individual personas. As a result, the agents' Twitter accounts resemble each other in terms of activity and content.

Figure 4. Co-retweet network of NIS accounts. ID indicates assignment of agents to accounts (SeoulHigherCourt, 2015).

Detection and Validation of Additional Astroturfing Accounts

In this section, we show that we can use the retweet, co-tweet, and co-retweet networks to identify almost a thousand additional accounts that share the same suspicious pattern of message coordination. And while their account names are not mentioned in the court documents, they are likely also part of the campaign: their hourly, daily, and weekly activity resembles that of NIS accounts, they have similar Twitter IDs, and they retweet the same accounts and tweet on the same subjects as known NIS accounts. Most importantly, only a handful of those suspect accounts have been active on Twitter since January 2013.

Detection of Additional NIS Accounts

Detection based on Retweet Networks. One of the most common approaches suggested for identifying astroturfing is to check who retweets or follows known astroturfers. The problem with this approach is that if the campaign has an impact and convinces regular users to retweet its message, those users will be counted among the astroturfing accounts. This leads to a drastic overestimation of the campaign size, and an underestimation of the campaign's influence. The approach clearly requires more nuance. Almost exclusively retweeting astroturfing accounts, for instance, would certainly strike one as suspicious - and many NIS accounts' retweets indeed consist of more than 90% NIS tweets. In our first detection approach, we therefore assume that accounts exceeding a specific percentage of NIS retweets are NIS accounts as well. We then recalculate the fraction using the new set of known and suspected NIS accounts, and evaluate whether there are additional accounts that should be included. We continue this process until no new accounts are added to the list of NIS suspects. Too low a threshold of course ends up declaring almost all accounts suspicious, but even a relatively moderate threshold of 50% stops after having identified an additional 204 accounts.

Detection based on Co-tweet Networks. The complete co-tweet network of all accounts in our database with a time window of one minute consists of 2,001 accounts and 38,035 unique instances of co-tweeting (see Figure A8 in the Supplementary Material) - after dropping any tie between accounts that co-tweet only once to reduce the noise in the data. The biggest component of the network consists of 730 accounts, of which 68 are known NIS accounts. The second biggest component, with 67 accounts, is entirely made up of NIS accounts.
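The iterative expansion based on retweet fractions described above can be sketched as follows. The input format (a dictionary mapping each account to counts of whom it retweeted) and the function name are illustrative assumptions, not the authors' implementation.

```python
def expand_by_retweet_fraction(retweets: dict, seed_accounts: set, threshold: float = 0.5) -> set:
    """Iteratively flag accounts whose retweets mostly target already-flagged accounts.

    `retweets` maps user_id -> {retweeted_user_id: count, ...} (assumed input);
    `seed_accounts` is the set of known NIS accounts; `threshold` = 0.5 means 50%.
    """
    flagged = set(seed_accounts)
    changed = True
    while changed:
        changed = False
        for user, targets in retweets.items():
            if user in flagged:
                continue
            total = sum(targets.values())
            if total == 0:
                continue
            frac = sum(c for t, c in targets.items() if t in flagged) / total
            if frac >= threshold:
                flagged.add(user)
                changed = True       # recompute fractions with the enlarged set
    return flagged - set(seed_accounts)   # newly identified suspects only
```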
We manually checked all remaining components of the network containing no known NIS accounts for other potential suspect accounts, but they were either composed of obvious bots tweeting about unrelated topics or contained very generic co-tweets ("Hey new Twitter user, please follow me back").

Detection based on Co-retweet Networks. This network contains many densely connected components, a few of which are made up solely of NIS accounts (see Figure A9 in the Supplementary Material). There are 440 accounts located on components with known NIS accounts. Other components could of course also contain NIS suspects, but a manual investigation of all components with more than five accounts reveals mostly bot-like accounts unrelated to the campaign.

Validation

We identify 834 unique additional accounts.9 We cannot know for sure that the suspect accounts are indeed controlled by NIS agents, but in this section, we produce supporting evidence. Figure 5(b) displays the percentage of tweets posted on a given day of the week for each group of suspects identified by the different methods. The weekends tend to be low-activity days for all suspects, irrespective of the detection approach - just as they are for the known NIS accounts. Regular users, on the other hand, are most active on Saturdays and Sundays. The same similarity also holds for working hours. Figure 5(a) shows the density of the different groups' hourly activity. Suspects - no matter with which method they were identified - tend to follow the office-hour pattern of known NIS accounts. The suspects based on retweeting most closely resemble the known NIS accounts, but even the suspects based on co-tweeting and co-retweeting all display a markedly different pattern than the regular users. Figure A2 in the Supplementary Material examines the suspects' daily activity throughout the research period and also finds similarities with that of known NIS accounts.

Figure 5. Distribution of tweets posted by different groups on a given hour during the day (a) and a given weekday (b).

Judging from their Twitter user IDs, suspicious accounts were also created around the times when many known NIS accounts were. The Twitter user ID tells us when an account was created, because Twitter assigns the number in ascending order to new accounts. Figure 6 shows the distribution of Twitter user IDs for known NIS accounts (top), for the suspect accounts (middle), and for a random sample of regular users (bottom). As the density lines indicate, regular accounts are relatively evenly distributed, while NIS accounts and suspect accounts are younger on average than regular users and were created during specific, often similar, time intervals. This pattern corresponds to the testimonials from NIS agents that the agents routinely created about 20 or more Twitter accounts at once (SeoulHigherCourt, 2015, p. 78).

Figure 6. Distribution of Twitter User IDs for known NIS accounts, suspect accounts, and a random sample of 1,000 regular users.

Table 1
Current status of accounts in database

Type                      Total    Active    Inactive    Suspended    Deactivated
Random sample             5,000    40.1%      7.3%        3.5%        37.5%
NIS                         702     0.5%      1.0%        2.0%        96.5%
Retweet (50%) suspects      204     0.9%      2.9%       11.76%       84.31%
Co-tweet suspects           662     7.7%      9.8%       17.9%        64.5%
Co-retweet suspects         440     7.9%     12.3%        6.8%        72.9%
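Because user IDs were assigned in ascending order during this period, they can serve as a proxy for creation order. As a purely illustrative check that goes beyond the visual comparison in Figure 6 (and is not part of the paper's own analysis), one could compare the ID distributions of suspects and regular accounts with a two-sample Kolmogorov-Smirnov test:

```python
from scipy.stats import ks_2samp

def compare_creation_order(suspect_ids, regular_ids):
    """Two-sample KS test on numeric user IDs as a proxy for account creation order.

    `suspect_ids` and `regular_ids` are lists of numeric Twitter user IDs (assumed inputs).
    A small p-value indicates the two groups were created in systematically
    different periods, as Figure 6 suggests for NIS-linked accounts.
    """
    stat, p_value = ks_2samp(suspect_ids, regular_ids)
    return stat, p_value
```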
But the clearest indication that the suspects are not just ordinary users who fell for the campaign's message is the current status of the accounts. As mentioned earlier, the NIS reacted to the fact that its campaign was discovered by frantically shutting down the accounts involved. It is therefore unlikely that any true suspect account would still be active. Almost all of the known NIS accounts have been either deactivated by the user, suspended by Twitter, or inactive since January 2013. The same is true for a large part of the suspect accounts, as Table 1 shows: while 40% of all non-suspect accounts are still active, less than 10% of the suspect accounts are. The lowest rate of still-active accounts is among the suspects discovered by the method based on retweeting patterns: only 0.9% are still active, while 96% have been deactivated or suspended - a rate close to that of the known NIS accounts (98.5%). The rate of active accounts is somewhat higher among the suspects identified using co-tweet (7.7%) and co-retweet networks (7.9%), but still much lower than among the regular users.10 How does our detection strategy compare to established approaches? We are not aware of existing methods that allow for the detection of astroturfing, but there is a burgeoning literature on the identification of automated accounts. Yet, there are considerable conceptual differences between the detection of bots and of astroturfing: the former can identify automated accounts, even if they are not part of any secret campaign, the latter participants in astroturfing campaigns, even if they are humans. Our relational detection method is based on metrics derived from the group-based behavior of accounts, irrespective of whether the behavior is automated or not. We argue that our approach is therefore theoretically superior and show this empirically in a comparison with existing (automated bot detection) methods in Supplementary Material Section A6.

Comparing Content Similarity

In this section, we conduct a quantitative and qualitative analysis of the Twitter content posted by known and suspect NIS accounts. Participants of information campaigns coordinate the messages they post based on top-down instructions from their principals (SeoulHigherCourt, 2015, p. 240). More specifically, the NIS wanted to promote Saenuridang's conservative ideas, draw attention away from negative news about that party, denigrate liberal candidates, raise divisive issues on North Korea, and pollute the general public discourse on social media platforms dominated by liberals (SeoulHigherCourt, 2015, p. 145). We use content analysis not just to provide a richer understanding of the NIS' strategy, but also as yet another validation for the detection of suspect accounts. We compare NIS contents to three different baselines. First, we take a random sample from the tweets by regular users in our dataset to create a base rate estimate of word use by average users (tweets N = 867,736, users N = 235,624). Second, we construct a political sample of tweets based on an extensive list of political keywords assembled by Song, Kim, and Jeong (2014) (tweets N = 5,840,159, users N = 300,253). The third baseline is created using all 27,606 tweets posted by 58 opinion leaders as identified by Bae, Son, and Song (2013).11 We use these datasets for three types of content analyses.12 First, we analyze word frequencies to examine the distinctive patterns of word use.
Tweets posted by known and suspect NIS accounts should reflect these goals and strategies, and therefore differ significantly from those posted by regular users, political users, and opinion leaders (Broniatowski et al., 2018). Second, we identify the Twitter accounts most commonly retweeted by the five different groups. Retweeting is often used to amplify the impact of a campaign (Yang & Kim, 2017), and retweet patterns usually reflect political divisions among Twitter users (Conover et al., 2011). Thus, looking at the accounts retweeted by the NIS should help us understand the goals and underlying motives of the astroturfing campaign. Third, we examine how the themes discussed among the different groups vary over time. Since the agents received centralized instructions (SeoulDistrictCourt, 2014; SeoulHigherCourt, 2015), we would expect their accounts to use specific keywords that are related to their campaign motives.

Comparing Keywords. We tested whether the NIS accounts generated more politically slanted tweets than average users and whether the NIS suspects produced similar content (see the 50 most popular keywords for each group in Table A2 of the Supplementary Material). The known and the suspected NIS accounts are significantly more likely to tweet about politics than average users. Many of their popular keywords are related to politics in general (e.g., presidential election, voting, the names of the candidates), whereas popular keywords used by random users are mostly about daily conversation (e.g., lol, emojis, I am). Intriguingly, the most common keywords used by NIS accounts often relate to the hardline stance against North Korea that is popular among the conservative party. They frequently mentioned "North Korea" and "the North" and the names of the Kim family, used the politically charged keyword "North Korea followers", and talked about the contentious missile tests. These terms do not appear among the most popular keywords of the regular users. To quantify the similarities of the keywords used by different groups, we calculated Kendall's rank correlation coefficient. The results are telling. The ranks of the most frequently used keywords by the NIS accounts and the NIS suspects are strongly correlated (top 50 keywords: r = 0.61, p < .001; top 100 keywords: r = 0.66, p < .001). In contrast, top NIS keywords are negatively correlated with the most frequently used keywords by regular users (top 50 keywords: r = -0.45, p < .001; top 100 keywords: r = -0.43, p < .001) and with the keywords used by the opinion leaders (top 50 keywords: r = -0.30, p < .001; top 100 keywords: r = -0.25, p < .001). Further, they show no statistically significant relationship with the most frequently used keywords of the political sample. The patterns of correlation between the NIS suspects and other groups closely resemble the correlations of known NIS accounts.

Comparing Retweeted Users. Next, we examine the most retweeted accounts among the five groups. In a campaign context, retweeting can be used to strategically boost the visibility of the campaign message and build networks of like-minded others. Retweeting patterns therefore also reveal the main message a campaign wants to convey. Not surprisingly, the popular Twitter accounts retweeted by the NIS accounts are right-wing opinion leaders that the NIS officials wanted to systematically promote (SeoulHigherCourt, 2015, p. 53; see Table A3 in the Supplementary Material).
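One plausible way to implement the rank comparison reported above is to rank each group's keywords by frequency, take the reference group's top-k list, and compute Kendall's tau over the paired ranks; keywords missing from the comparison group are pushed to the bottom of its ranking. The input format and the handling of missing keywords are assumptions for illustration:

```python
from scipy.stats import kendalltau

def keyword_rank_correlation(freq_a: dict, freq_b: dict, top_k: int = 50):
    """Kendall's tau between the top-k keyword ranks of two groups.

    `freq_a`, `freq_b` map keyword -> frequency (assumed inputs).
    Keywords absent from group B are assigned a rank below B's full list.
    """
    rank_a = {w: r for r, (w, _) in
              enumerate(sorted(freq_a.items(), key=lambda kv: -kv[1]), start=1)}
    rank_b = {w: r for r, (w, _) in
              enumerate(sorted(freq_b.items(), key=lambda kv: -kv[1]), start=1)}
    top_words = sorted(rank_a, key=rank_a.get)[:top_k]
    worst_b = len(rank_b) + 1
    x = [rank_a[w] for w in top_words]
    y = [rank_b.get(w, worst_b) for w in top_words]
    tau, p_value = kendalltau(x, y)
    return tau, p_value
```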
For instance, WonJae Jang (@yark991), the most popular account, is a right-wing pundit on a conservative news channel. Other popular accounts include the well-known conservative pundit Hee-jae Byun (@pyein2), political bloggers Jong-won Kang (@koreaspiritnana) and Mi-hong Jung (@naya2816), right-wing activists Jung-hoon Yoon (@junghooonyoon) and Joo-jin Yoon (@yoonjoojin), as well as other NIS accounts. When comparing these accounts to the popular Twitter accounts retweeted by regular users and the political sample, the differences are obvious. The NIS accounts rarely retweeted the liberal accounts that were highly popular among other groups (e.g., Oi-soo Lee @oisoo, Yong-min Kim @funronga, Jung-kwon Jin @unheim, RainMaker @mettayoon), which again reveals their intentions.

Temporal Similarities. The court records also indicate that NIS agents were usually given instructions from their principals through offline meetings and emails (SeoulHigherCourt, 2015, p. 240). We would thus expect the known and suspect NIS accounts to tweet about similar matters at the same time. To test this hypothesis, we compare two sets of political keywords. The first set consists of general election-related keywords such as presidential election, politics, candidate, and the names of the leading presidential candidates. The second set consists of highly polarizing political keywords particularly related to North Korea. North Korea is one of the most prominent topics that triggers fear and nationalism among the public and boosts support for the conservative party. Due to the powerful priming effects of North Korea, conservative politicians have used the threats from North Korea (and even created pseudo-events) to frame liberals as North Korea sympathizers in order to win elections. The NIS officials also instructed the agents to push North Korean issues and described them as participating in a "psychological warfare against the North" (SeoulHigherCourt, 2015, p. 66). Thus, we expect to find a unique focus on keywords related to North Korea among the NIS accounts and NIS suspects.13 Figure 7 shows the distinctive temporal patterns of these two different sets of keywords. Keywords referring to the election generated a fair amount of attention throughout our research period in all five groups, and the over-time patterns of the popularity of these keywords are similar across the different groups. In contrast, keywords related to North Korea were highly popular only among the NIS accounts and the suspects, but barely drew any attention from non-NIS accounts. The sharp spikes before the election were only visible in the NIS-related accounts. This pattern is consistent with what the court records describe as attacking those suspected of sympathy with North Korea (i.e., "North Korea followers") and liberals (SeoulHigherCourt, 2015, p. 241). We again reveal a substantial similarity between NIS accounts and suspects.

Figure 7. Temporal changes in the use of keywords: (a) general election-related keywords (candidate, Geun-hye Park, Jae-In Moon, president, presidential election); (b) North Korea-related keywords (Jung-Eun Kim, missile, North Korea, North Korea followers, Yeonpyeongdo), shown separately for NIS accounts, NIS suspects, opinion leaders, the political sample, and the random sample.
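A comparison like the one in Figure 7 can be approximated by computing, for each group and day, the share of that group's tweets containing a given keyword. The sketch below is an illustrative implementation with assumed column names (group, text, created_at):

```python
import pandas as pd

def daily_keyword_share(tweets: pd.DataFrame, keyword: str) -> pd.DataFrame:
    """Daily share of tweets containing `keyword`, per group.

    `tweets` is assumed to have columns: group, text, created_at.
    Returns a DataFrame indexed by date with one column per group.
    """
    df = tweets.copy()
    df["date"] = pd.to_datetime(df["created_at"]).dt.date
    df["hit"] = df["text"].str.contains(keyword, case=False, na=False)
    # Mean of the boolean flag within each (group, day) cell = share of matching tweets.
    return df.groupby(["group", "date"])["hit"].mean().unstack(level=0)
```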
The Impact of Astroturfing

Having established that the disinformation campaign was larger than portrayed in the court documents, we now turn to the question of its impact. Several factors should make the NIS astroturfing campaign a best-case scenario for substantially influencing public opinion: (1) a highly professionalized organization employed actual humans, native speakers familiar with local politics and culture; (2) the campaign was flexible and adjusted daily to changing political conditions (SeoulHigherCourt, 2015, p. 239); (3) without previous publicly known cases of astroturfing, only few citizens would have suspected that political actors manipulate social media; (4) the 2012 Korean election took place in a polarized landscape similar to that of the 2016 U.S. election, which is conducive to disinformation being popularized in partisan parts of social networks (Faris et al., 2017). Figure 8 shows the campaign's Twitter impact in terms of three commonly used metrics (e.g., Yang & Kim, 2017) available in our data - the number of followers, the number of retweets received, and the number of @-mentions received - displaying the impact of the mean account as a vertical line and the distribution as a kernel density plot. We use regular accounts as well as the group of opinion leader accounts as comparison. NIS and suspect accounts have a very similar follower distribution and mean (1,837 and 1,657, respectively). In fact, the vast majority of accounts have around 1,800 followers, a phenomenon most easily explained by a coordinated strategy of mutually following all other NIS accounts to boost follower counts.

Figure 8. Twitter influence of different groups of accounts based on number of followers (a), @-mentions (b), and retweets (c) received. Vertical lines show the mean for the respective group.

This strategy means most NIS accounts had more followers than the regular accounts in our data (431), although they do not even get close to the numbers accumulated by opinion leaders (69,611). Known NIS (6.7 @-mentions) and suspect accounts (6.26 @-mentions) nevertheless receive fewer mentions than ordinary users (25.2 @-mentions), a finding also common for bots (Varol et al., 2017). NIS (69 retweets) and suspect accounts (23.1 retweets) do receive more retweets on average than regular users (7.9 retweets). On both measures, they pale in comparison with the opinion leaders, who receive 2,658 mentions and 8,400 retweets on average (note that the x-axis is on a log scale). However, we already know that many of the accounts retweeting NIS accounts are themselves NIS accounts. So what is the actual impact of the campaign among regular users, and does our assessment change once we include the suspect accounts? The 702 known NIS accounts posted almost 100,000 original tweets, the suspects another 605,000. Yet with our dataset containing around 60 million original tweets, this is barely more than 1%. As our dataset is a 10% sample of the whole Twitter stream, we would therefore conclude that the NIS astroturfing campaign could have posted more than 7 million original tweets. Did these tweets have an impact? The known NIS accounts were retweeted around 50,000 times - as were the suspect accounts. Figure 9 shows that this is a minuscule fraction of all retweets in our data (1st bar).
Zooming in on the retweets of NIS accounts (2nd bar), we can see that about 40% of the retweets occur within the campaign. In other words, the campaign is in fact only about half as effective as one would assume from a mere retweet count. This assessment only marginally improves when including the suspects among our NIS accounts (3rd and 4th bar): the suspect accounts appear to be somewhat more successful in gathering retweets from regular users, but even after doubling the campaign's retweet count, their share of all retweets in the dataset is still negligible. The picture remains the same if we only look at political tweets (see Figure A6 in the Supplementary Material).

Figure 9. Received retweets by different groups of users. (a) only includes known NIS accounts and (b) includes suspect users in the group of NIS accounts.

Finally, we also examine whether like-minded accounts (accounts retweeted by the NIS campaign) are retweeted more often by regular users while the NIS campaign is ongoing (see Figure A7 in the Supplementary Material), or whether keywords favored by the NIS are used more often by regular users during that time (Figure 7). However, neither figure indicates that the NIS campaign successfully boosted accounts sympathetic to it or that it was able to influence the overall discussion with its agenda. Why did this orchestrated campaign have only a modest impact on Twitter? One reason might be that many astroturfing accounts needed to start from scratch and thus lacked the necessary reputation to attract large followings, retweets, or @-mentions. In addition, the agents may well have become more concerned about fulfilling the specific and easy-to-supervise task of posting or retweeting a large number of messages than about achieving the overarching, but hard-to-measure, goal of swaying online political opinion. The "copy and paste" tactics observed in the co-tweet network likely did little to engage real social media users. Principal-agent problems thus not only increase the chance of detection, but also limit the effectiveness of astroturfing campaigns.

Conclusion

We have examined one of the first confirmed cases of electoral astroturfing, during the 2012 South Korean presidential election, to enhance our understanding of disinformation campaigns more generally. We argue that the mere act of deceiving the online audience about the centrally initiated and organized nature of the campaign should be considered disinformation, even if the information spread is truthful. Conceptualizing disinformation more broadly points toward potentially greater detrimental effects: if citizens not only have to worry about the truthfulness of information ("is this true?") but also about the identity and intentions of their conversation partners, general trust in online discussion venues and their potential for political discourse will erode further. Because a ground truth is rarely available, systematic research into astroturfing campaigns is lacking. We have exploited the availability of such data in our case to show that astroturfing accounts exhibit distinct activity patterns caused by the campaign's centralized nature and the principal-agent problems associated with it.
The main advantage of our study, the availability of a ground truth thanks to the public court proceedings, is at the same time a limitation: like most such data, it only became available after the campaign had concluded. Hence, we had to reconstruct the data for 2012 from a general and incomplete sample of the Twitter stream. This also made it harder to measure the online impact of the campaign, and impossible to evaluate its impact on the actual election. However, as the former was limited (as best we can tell), we have no reason to assume that the NIS campaign substantially affected the election results. This finding is in itself important: if even a powerful and well-financed organization like the South Korean secret service cannot instigate a successful disinformation campaign, then doing so may be more difficult than often assumed in public debates. It has been shown that social endorsements in particular drive the selection, and hence the popularity, of (political) online content (Messing & Westwood, 2014). But many astroturfing accounts lack the credibility and social embeddedness of a "real world" profile. As a result, disinformation seems to rely on a larger, complicit media ecosphere to come into full effect (Faris et al., 2017; Lukito et al., 2018). Thus, even though the NIS tried to take advantage of political predispositions and the polarization of the South Korean political landscape, its message may not have traveled well on a predominantly liberal social media platform. It is, of course, possible that astroturfing has become more sophisticated in the last seven years.

We have presented arguments for why our methodology is conceptually better suited to detect coordinated disinformation campaigns on social media than predominant approaches focusing on automated activity. We could only partially test this argument, since the NIS deleted the involved accounts and, with them, the data necessary for state-of-the-art bot detection methods (Varol et al., 2017). We can say, however, that commonly applied activity thresholds to flag automated accounts (Howard & Kollanyi, 2016) would have misrepresented the scope of the NIS operation.
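For contrast, an activity-threshold flag of the kind just mentioned can be written in a few lines. The sketch below is illustrative only: the cut-off of 50 tweets per day is a commonly cited heuristic rather than a value taken from this study, and the column names (user, created_at) are assumptions.

```python
# Illustrative contrast with the coordination-based approach above: flag accounts
# whose busiest day exceeds a fixed tweet count. The 50-tweets-per-day cut-off
# and the column names ("user", "created_at") are assumptions for this sketch.
import pandas as pd


def flag_high_activity(tweets: pd.DataFrame, per_day: int = 50) -> set:
    """Return accounts that posted more than `per_day` tweets on at least one day."""
    daily = (
        tweets.assign(day=pd.to_datetime(tweets["created_at"]).dt.date)
        .groupby(["user", "day"])
        .size()
    )
    peak = daily.groupby(level="user").max()
    return set(peak[peak > per_day].index)
```

Human-operated accounts that spread their posting across the day slip under such a cut-off even when they co-tweet heavily, while unrelated high-volume accounts are swept in; comparing the flagged set with a ground-truth list makes both kinds of error visible.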
The deletion of account data, finally, points toward a severe policy problem: if social media platforms do indeed permanently delete the data of accounts implicated in disinformation campaigns, or at least interpret privacy protection laws in a way that also covers accounts that are not human beings but fake online personas, then research on disinformation will be severely hampered. Prosecutors and academics currently lack the means to investigate disinformation ex post, when more external information has become available. These restrictions on data access impede our understanding of the character, scope, and effects of coordinated disinformation campaigns.

Acknowledgments

We thank three reviewers, the editors, Rebecca Adler-Nissen, Pablo Barbera, Yevgeniy Golovchenko, Shahryar Minhas, Jennifer Pan, Suzanne Robbins, Molly Roberts, and seminar participants at APSA, EPSA, ICA, MPSA, SVPW, the ASNA seminar in Bern, the Digital Disinformation Workshop at the University of Copenhagen, Communication Crossroads at the University of Wisconsin-Madison, the Hng-L group at the University of California-San Diego, Hong Kong Baptist University, and the University of Manchester for helpful comments on previous versions of this paper. We are grateful to Dhavan Shah and the Social Media and Democracy Research Group at the University of Wisconsin-Madison for their support, and to Hyeonjong Min for his research assistance. We also thank the VolkswagenStiftung for inviting us to a workshop where we were able to initiate this project. All remaining errors are our own.

Data availability statement

The data described in this article are openly available in the Open Science Framework at https://osf.io/tpa6u/.

Open Scholarship

This article has earned the Center for Open Science badges for Open Data and Open Materials through Open Practices Disclosure. The data and materials are openly accessible at https://osf.io/tpa6u/.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

Franziska Keller thanks the Swiss National Science Foundation for the Postdoc.Mobility grant that allowed her to spend time on this project. This work was partially supported by the National Research Foundation of Korea (Grant NRF-2016S1A3A2925033).

Notes

1. https://www.reddit.com/r/conspiracy/comments/8zja60/guide_how_to_run_a_bot_farm
2. https://qz.com/311832/hacked-emails-reveal-chinas-elaborate-and-absurd-internet-propaganda-machine
3. http://www.rferl.org/content/how-to-guide-russian-trolling-trolls/26919999.html
4. http://www.nytimes.com/2013/11/22/world/asia/prosecutors-detail-bid-to-sway-south-korean-election.html
5. TweetDeck is a third-party application for Twitter. Once a user authorizes her Twitter accounts in the application, she can connect and control multiple Twitter accounts in a single application interface. The court ruled that the accounts linked on TweetDeck should be considered as controlled by the NIS.
6. Morstatter, Pfeffer, Liu, and Carley (2013) have shown that the Twitter Streaming API does not necessarily provide a random sample of tweets. But we cannot assess this bias ex post, because Twitter is not transparent about its sampling criteria.
7. For the ordering of accounts, we use a blockclustering approach discussed in detail elsewhere (Keller, Schoch, Stier, & Yang, 2017).
8. We consider two tweets to be identical if their text (excluding any URL present in the tweet) is exactly the same.
9. Some of the accounts switched their names. The 834 accounts are therefore associated with 861 different account names. Table A1 in the Supplementary Material provides an overview of the number of suspects identified by each identification strategy and the overlaps between the three methods.
10. The numbers for regular users do not add up to 100%, as 11.6% of randomly selected users had private accounts.
These accounts still exist, but we cannot determine whether they are active or inactive.
11. The list of opinion leaders includes major politicians, Twitter celebrities, popular podcasters, journalists, and political activists who were active during the 2012 election campaign.
12. For each dataset, we randomly selected 10,000 tweets per group and preprocessed the text by removing URLs, special characters, punctuation, and emojis, converting all words to lowercase, eliminating unnecessary white space, and extracting retweeted Twitter usernames. In addition, we constructed a stemming dictionary based on the current data and stemmed popular keywords. The final dataset was built after removing sparse words that appear fewer than ten times in the whole dataset.
13. The keywords shown here were selected from the most popular keywords used by the NIS accounts, shown in Table A2 in the Supplementary Material.

Supplementary Materials

Supplemental data for this article can be accessed on the publisher's website at https://doi.org/10.1080/10584609.2019.1661888. Replication files are available at OSF: https://doi.org/10.17605/OSF.IO/TPA6U

ORCID

Franziska B. Keller http://orcid.org/0000-0001-9728-7447
David Schoch http://orcid.org/0000-0003-2952-4812
Sebastian Stier http://orcid.org/0000-0002-1217-5778
JungHwan Yang http://orcid.org/0000-0001-5807-0982

References

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211-236. doi:10.1257/jep.31.2.211
Badawy, A., Ferrara, E., & Lerman, K. (2018). Analyzing the digital traces of political manipulation: The 2016 Russian interference Twitter campaign. Retrieved from https://arxiv.org/abs/1802.04291
Bae, J.-H., Son, J.-E., & Song, M. (2013). Analysis of Twitter for 2012 South Korea presidential election by text mining techniques. Journal of Intelligence and Information Systems, 19(3), 141-156. doi:10.13088/jiis.2013.19.3.141
Bennett, W. L., & Segerberg, A. (2013). The logic of connective action: Digital media and the personalization of contentious politics. New York, NY: Cambridge University Press.
Bode, L., Hanna, A., Yang, J., & Shah, D. V. (2015). Candidate networks, citizen clusters, and political expression: Strategic hashtag use in the 2010 midterms. The ANNALS of the American Academy of Political and Social Science, 659(1), 149-165. doi:10.1177/0002716214563923
Broniatowski, D. A., Jamison, A. M., Qi, S., AlKulaib, L., Chen, T., Benton, A., ... Dredze, M. (2018). Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health, 108(10), 1378-1384. doi:10.2105/AJPH.2018.304567
Cao, Q., Yang, X., Yu, J., & Palow, C. (2014). Uncovering large groups of active malicious accounts in online social networks. Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security (pp. 477-488).
Chadwick, A. (2013). The hybrid media system: Politics and power. Oxford, UK: Oxford University Press.
Chu, Z., Gianvecchio, S., Wang, H., & Jajodia, S. (2012). Detecting automation of Twitter accounts: Are you a human, bot, or cyborg? IEEE Transactions on Dependable and Secure Computing, 9(6), 811-824. doi:10.1109/TDSC.2012.75
Conover, M., Ratkiewicz, J., Francisco, M., Goncalves, B., Menczer, F., & Flammini, A. (2011). Political polarization on Twitter. Proceedings of the Fifth International AAAI Conference on Web and Social Media (pp. 89-96).
Director of National Intelligence. (2017). Assessing Russian activities and intentions in recent US elections. Retrieved from https://www.dni.gov/files/documents/ICA_2017_01.pdf
Faris, R., Roberts, H., Etling, B., Bourassa, N., Zuckerman, E., & Benkler, Y. (2017). Partisanship, propaganda, and disinformation: Online media and the 2016 U.S. presidential election (Berkman Klein Center Research Publication 2017-6). Retrieved from https://ssrn.com/abstract=3019414
Grimme, C., Assenmacher, D., & Adam, L. (2018). Changing perspectives: Is it sufficient to detect social bots? International Conference on Social Computing and Social Media (pp. 445-461).
Howard, P. N. (2006). New media campaigns and the managed citizen. Cambridge, UK: Cambridge University Press.
Howard, P. N., & Kollanyi, B. (2016). Bots, #Strongerin, and #Brexit: Computational propaganda during the UK-EU referendum. Retrieved from http://politicalbots.org/wp-content/uploads/2016/06/COMPROP-2016-1.pdf
Jackson, D. (2017). Distinguishing disinformation from propaganda, misinformation, and "fake news". National Endowment for Democracy. Retrieved from https://www.ned.org/issue-brief-distinguishing-disinformation-from-propaganda-misinformation-and-fake-news
Jungherr, A. (2016). Twitter use in election campaigns: A systematic literature review. Journal of Information Technology & Politics, 13(1), 72-91. doi:10.1080/19331681.2015.1132401
Keller, F., Schoch, D., Stier, S., & Yang, J. (2017). How to manipulate social media: Analyzing political astroturfing using ground truth data from South Korea. Proceedings of the Eleventh International AAAI Conference on Web and Social Media (pp. 564-567). Menlo Park, CA: The AAAI Press.
King, G., Pan, J., & Roberts, M. E. (2017). How the Chinese government fabricates social media posts for strategic distraction, not engaged argument. American Political Science Review, 111(3), 484-501. doi:10.1017/S0003055417000144
Lilleker, D. G., & Koc-Michalska, K. (2017). What drives political participation? Motivations and mobilization in a digital age. Political Communication, 34(1), 21-43. doi:10.1080/10584609.2016.1225235
Linvill, D. R., & Warren, P. L. (2018). Troll factories: The internet research agency and state-sponsored agenda building. Department of Communication, Clemson University. Retrieved from http://pwarren.people.clemson.edu/Linvill_Warren_TrollFactory.pdf
Lukito, J., Wells, C., Zhang, Y., Doroshenko, L., Kim, S. J., Su, M.-H., ... Freelon, D. (2018, March 20). The Twitter exploit: How Russian propaganda infiltrated U.S. news. Retrieved from https://uwmadison.app.box.com/v/TwitterExploit
Marwick, A., & Lewis, R. (2017). Media manipulation and disinformation online. Data and Society Research Institute. Retrieved from https://datasociety.net/output/media-manipulation-and-disinfo-online
Messing, S., & Westwood, S. J. (2014). Selective exposure in the age of social media: Endorsements trump partisan source affiliation when selecting news online. Communication Research, 41(8), 1042-1063. doi:10.1177/0093650212466406
Miller, G. J. (2005). The political evolution of principal-agent models. Annual Review of Political Science, 8(1), 203-225. doi:10.1146/annurev.polisci.8.082103.104840
Morstatter, F., Pfeffer, J., Liu, H., & Carley, K. M. (2013). Is the sample good enough? Comparing data from Twitter's Streaming API with Twitter's Firehose. Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media (pp. 400-408).
Neuman, W. R., Guggenheim, L., Jang, S. M., & Bae, S. Y. (2014). The dynamics of public attention: Agenda-setting theory meets big data. Journal of Communication, 64(2), 193-214. doi:10.1111/jcom.12088
Newstapa. (2013). Reporting on manipulation of internet public opinion by South Korea's spy agency before 2012 presidential election. Retrieved from http://de.slideshare.net/newstapa/south-korean-spy-agencys-illegal-campaigning-on-sns
Ross, S. A. (1973). The economic theory of agency: The principal's problem. The American Economic Review, 63(2), 134-139.
Seoul District Court. (2014). Case ID: 2013GoHap577, 2013GoHap1060. Seoul, South Korea.
Seoul Higher Court. (2015). Case ID: 2014No2820. Seoul, South Korea.
Silverman, C., & Alexander, L. (2016). How teens in the Balkans are duping Trump supporters with fake news. Buzzfeed News. Retrieved from https://www.buzzfeednews.com/article/craigsilverman/how-macedonia-became-a-global-hub-for-pro-trump-misinfo
Song, M., Kim, M. C., & Jeong, Y. K. (2014). Analyzing the political landscape of 2012 Korean presidential election in Twitter. IEEE Intelligent Systems, 29(2), 18-26. doi:10.1109/MIS.2014.20
Stier, S., Bleier, A., Lietz, H., & Strohmaier, M. (2018). Election campaigning on social media: Politicians, audiences and the mediation of political communication on Facebook and Twitter. Political Communication, 35(1), 50-74. doi:10.1177/1461444817709282
Stukal, D., Sanovich, S., Bonneau, R., & Tucker, J. A. (2017). Detecting bots on Russian political Twitter. Big Data, 5(4), 310-324. doi:10.1089/big.2017.0038
Thorson, K., & Wells, C. (2016). Curated flows: A framework for mapping media exposure in the digital age. Communication Theory, 26(3), 309-328. doi:10.1111/comt.12087
Vaccari, C. (2017). Online mobilization in comparative perspective: Digital appeals and political engagement in Germany, Italy, and the United Kingdom. Political Communication, 34(1), 69-88. doi:10.1080/10584609.2016.1201558
Varol, O., Ferrara, E., Davis, C., Menczer, F., & Flammini, A. (2017). Online human-bot interactions: Detection, estimation, and characterization. Proceedings of the Eleventh International AAAI Conference on Web and Social Media (pp. 280-289).
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. doi:10.1126/science.aap9559
Walker, E. T. (2014). Grassroots for hire: Public affairs consultants in American democracy. New York, NY: Cambridge University Press.
Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking (Council of Europe Report DGI(2017)09). Retrieved from https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
Yang, J., & Kim, Y. M. (2017). Equalization or normalization? Voter-candidate engagement on Twitter in the 2010 U.S. midterm elections. Journal of Information Technology & Politics, 14(3), 232-247. doi:10.1080/19331681.2017.1338174
Zannettou, S., Caulfield, T., De Cristofaro, E., Sirivianos, M., Stringhini, G., & Blackburn, J. (2018). Disinformation warfare: Understanding state-sponsored trolls on Twitter and their influence on the web. Retrieved from https://arxiv.org/pdf/1801.09288.pdf