Misinformation on Misinformation: Conceptual and Methodological Challenges

Sacha Altay (University of Oxford, UK)*, Manon Berriche (médialab, Sciences Po Paris, France)*, and Alberto Acerbi (Centre for Culture and Evolution, Brunel University London, UK)

*These authors contributed equally.

Social Media + Society, January-March 2023: 1-13. https://doi.org/10.1177/20563051221150412
Special Issue: Multidisciplinary approaches to mis- and disinformation studies

Abstract

Alarmist narratives about online misinformation continue to gain traction despite evidence that its prevalence and impact are overstated.
Drawing on research examining the use of big data in social science and reception studies, we identify six misconceptions about misinformation and highlight the conceptual and methodological challenges they raise. The first set of misconceptions concerns the prevalence and circulation of misinformation. First, scientists focus on social media because it is methodologically convenient, but misinformation is not just a social media problem. Second, the internet is not rife with misinformation or news, but with memes and entertaining content. Third, falsehoods do not spread faster than the truth; how we define (mis)information influences our results and their practical implications. The second set of misconceptions concerns the impact and the reception of misinformation. Fourth, people do not believe everything they see on the internet: the sheer volume of engagement should not be conflated with belief. Fifth, people are more likely to be uninformed than misinformed; surveys overestimate misperceptions and say little about the causal influence of misinformation. Sixth, the influence of misinformation on people's behavior is overblown as misinformation often "preaches to the choir." To appropriately understand and fight misinformation, future research needs to address these challenges.

Keywords
misinformation, misperceptions, social media, conspiracy theories, big data, audience research

Introduction

Concerns about misinformation are rising the world over. Today, Americans are more concerned about misinformation than about sexism, racism, terrorism, or climate change (Mitchell et al., 2019). Most of them (60%) think that "made-up news" had a major impact on the 2020 election (Auxier, 2020). Internet users are more afraid of fake news than of online fraud and online bullying (World Risk Poll, 2020). Numerous scientific and journalistic articles claim that online misinformation is the source of many contemporary sociopolitical issues while neglecting deeper factors such as the decline of trust in institutions (Benkler et al., 2018; Bennett & Livingston, 2020) or in the media (Newman et al., 2020).

Yet, the scientific literature is clear. Unreliable news, including false, deceptive, low-quality, or hyperpartisan news, represents a minute portion of people's information diet (Allen et al., 2020; Altay et al., 2022; Cordonier & Brest, 2021); most people do not share unreliable news (Grinberg et al., 2019; Guess et al., 2019); on average, people deem fake news less plausible than true news (Acerbi et al., 2022); social media are not the only culprit (Benkler et al., 2018; Bennett & Livingston, 2020; Tsfati et al., 2020); and the influence of fake news on large sociopolitical events is overblown (Guess et al., 2020; Mercier, 2020; Nyhan, 2020).

Because of this disconnect between public discourse and empirical findings, a growing body of research argues that alarmist narratives about misinformation should be understood as a "moral panic" (Anderson, 2021; Carlson, 2020; Jungherr & Schroeder, 2021; Mitchelstein et al., 2020). In particular, these narratives can be characterized as one of the many "technopanics" (Marwick, 2008) that have emerged with the rise of digital media. These panics repeat themselves cyclically (Orben, 2020) and are fueled by a wide range of actors such as the mass media, policymakers, and experts (Cohen, 1972, p. 8). A well-known example is the broadcast of Orson Welles' radio drama The War of the Worlds in 1938, which was quickly followed by alarmist headlines from the newspaper industry claiming that Americans suffered from mass hysteria. A few years later, the psychologist Hadley Cantril gave academic credence to this idea by estimating that a million Americans truly believed in the Martian invasion (Cantril et al., 1940). Yet, his claim turned out to be empirically unfounded. As Brad Schwartz (2015, p. 184) explains, "With the crude statistical tool of the day, there was simply no way of accurately judging how many people heard War of the Worlds, much less how many of them were frightened by it. But all the evidence—the size of the Mercury's audience, the relatively small number of protest letters, and the lack of reported damage caused by the panic—suggest that few people listened and even fewer believed. Anything else is just guesswork." The "Myth of the War of the Worlds Panic" illustrates how academic research can fuel, and legitimize, misconceptions about misinformation.

Much research has documented the role played by media and political discourse in building an alarmist narrative on misinformation (Anderson, 2021; Carlson, 2020; Jungherr & Schroeder, 2021; Mitchelstein et al., 2020). Here, we offer a critical review of academic discourse on misinformation, which exploded after the 2016 US presidential election with the publication of more than 2,000 scientific articles (Allen et al., 2020). We explain why overgeneralizations of scientific results and poor consideration of methodological limitations can lead to misconceptions about misinformation in the public sphere.
Drawing on research questioning the use of big data in social sciences (Boyd & Crawford, 2012; Kitchin, 2014; Tufekci, 2014) and reception studies (Livingstone, 2019), we identify six common misconceptions about misinformation (i.e., statements often found in press articles and scientific studies but not supported by empirical findings). These misconceptions can be divided into two groups: one concerning the prevalence and circulation of misinformation, and one concerning its impact and reception. Table 1 offers an overview of the misconceptions tackled in this article. Given that definitions of misinformation vary from one study to another, we use the term in its broadest sense, that is, as an umbrella term encompassing all forms of false or misleading information regardless of the intent behind it.

Table 1. Frequently encountered misconceptions about misinformation, and the misinterpretations of research results associated with them.

Prevalence and circulation
1. Misconception: Misinformation is just a social media problem.
   Misinterpretation: Scientists focus on social media because it is methodologically convenient. Yet, misinformation in legacy media and offline networks is understudied. Social media make visible and quantifiable social phenomena that were previously difficult to observe.
2. Misconception: The internet is rife with misinformation.
   Misinterpretation: Big numbers are the rule on the internet. The misinformation problem should be evaluated at the scale of the information ecosystem, e.g., by including news consumption and news avoidance in the equation.
3. Misconception: Falsehoods spread faster than the truth.
   Misinterpretation: How (mis)information is defined influences the perceived scale of the problem and the solutions to fight it. Misinformation should not be framed only in terms of accuracy (true vs. false); it could also be framed in terms of harmfulness or ideological slant.

Impact and reception
4. Misconception: People believe everything they see on the internet.
   Misinterpretation: Prevalence should not be conflated with impact or acceptance. Sharing or liking is not believing. Digital traces do not always mean what we expect them to, and often, to fully understand them, fine-grained analyses are needed.
5. Misconception: A large number of people are misinformed.
   Misinterpretation: Surveys overestimate people's misperceptions or beliefs in conspiracy theories, and poorly measure rare behaviors. This is due to poor survey practices, but also to the instrumental use that some participants make of these surveys (e.g., expressive responding).
6. Misconception: Misinformation has a strong influence on people's behavior.
   Misinterpretation: People mostly consume misinformation they are predisposed to accept. Acceptance should not be conflated with attitude or behavioral change. People do not change their minds easily, let alone their behavior.

Misconceptions About the Prevalence and Circulation of Misinformation

Misinformation Is Just a Social Media Problem

The internet drastically reduces the cost of accessing, producing, and sharing information. Social media reduces friction even more: the cost of connecting with physically distant minds is lower than ever.
Today’s wide access to rich digital traces makes contemporary large-scale issues like misinformation easier to study, which can give the impression that misinformation was less prevalent in the past (compared to reliable information). Alarmist headlines about the effect of social media and new technologies are widespread, such as “How technology disrupted the truth” (Viner, 2016) or “You thought fake news was bad? Deep fakes are where truth goes to die” (Schwartz, 2018). Yet, we need to be careful when comparing the big data sets that we have today with the much poorer and smaller data sets of the past. We should also resist the temptation of idealizing the past. There was no golden age in which people only believed and communicated true information (Nyhan, 2020). Conspiracy theories spread long before social media (Allcott & Gentzkow, 2017). Misinformation, such as false rumors, is a universal feature of human societies, not a modern exception (Acerbi, 2020). Before the advent of the internet, in the 70s, rumors about “sex thieves” arose from interactions between strangers in markets and spread throughout Africa (Bonhomme, 2016). In 1969, rumors about women being abducted and sold as sex slaves proliferated in the French city of Orleans (Morin, 1969). One cannot assume that misinformation is more common today simply because it is more available and measured. Social scientists focus on social media—Twitter in particular—because it is methodologically convenient (Tufekci, 2014). But active social media users are not representative of the general population (Mellon & Prosser, 2017), and most U.S. social media users (70%) say they rarely, or never, post about political or social issues (and only 9% say they often do so; Mcclain, 2021). Traditional media may play an important role in spreading misinformation, but because of the current focus on social media, little is known about misinformation in widely used sources of information such as television or radio. According to Google Scholar, in May 2022, only seven papers had both “fake news” and “television” in their title, compared to 578 with “fake news” and “social media” (the results are similar with misinformation, 6 vs 389). Television is a gateway to misinformation from elites, most notably from politicians (Benkler et al., 2018; Tsfati et al., 2020). For instance, Trump’s tweets in his final year of office were on cable news channels for a total of 32 h (Wardle, 2021). More broadly, the detrimental consequences of mainstream partisan media, where politicians routinely spread misinformation, are well documented (e.g., Broockman & Kalla, 2022). If political elites or partisan media did not wait for social media to spread misinformation, now they can leverage social media features (e.g., hashtags) to frame topics of discussion in online public debate and to impose their partisan agenda based (sometimes) on outright lies. These limitations invite more nuanced views on the role of social media in the misinformation problem and suggest that more work is needed on misinformation in legacy media and offline networks. The Internet is Rife With Misinformation During the 2016 US presidential campaign, the top 20 fake news stories on Facebook accumulated nearly nine million shares, reactions, and comments, between August 1 and November 8 (BuzzFeed, 2016). But what do nine million interactions mean on Facebook? 
According to one estimate, in the US, between 2019 and 2020, traffic to untrustworthy websites increased by 70%, whereas traffic to trustworthy websites increased by 47% (Majid, 2021), which led to the following headline: "Covid-19 and the rise of misinformation and misunderstanding" (Majid, 2021). Yet, traffic to trustworthy websites is one order of magnitude higher than traffic to untrustworthy websites. In March 2020, untrustworthy websites received 30 million additional views compared to March 2019, whereas traffic to trustworthy websites increased by two billion (Majid, 2021).

These two examples highlight the need to zoom out from the big numbers surrounding misinformation and to put them in perspective with people's information practices. The average US internet user spends less than 10 minutes a day consuming news online (Allen et al., 2020), and the average French internet user spends less than 5 minutes (Cordonier & Brest, 2021). News consumption represents only 14% of Americans' media diet and 3% of the time French people spend on the internet (Cordonier & Brest, 2021). Where is misinformation in all that? It represents 0.15% of the American media diet and 0.16% of the French one (Cordonier & Brest, 2021). In addition, misinformation consumption is heavily skewed: 61% of the French participants did not consult any unreliable sources during the 30 days of the study (Cordonier & Brest, 2021), and a small minority of people account for most of the misinformation consumed and shared online (Grinberg et al., 2019).

Just as the internet is not rife with news, the internet is not rife with misinformation. When people want to inform themselves, they primarily access established news websites (Fletcher et al., 2018; Guess et al., 2020) or, more commonly, turn on the TV (Allen et al., 2020). Misinformation receives little online attention compared to reliable news, and, in turn, reliable news receives little online attention compared to everything else that people do (Cordonier & Brest, 2021). People do not use the internet primarily to be informed, but rather to connect with friends, shop, work, and, more generally, for entertainment (Cordonier & Brest, 2021).

Documenting the large volume of interactions generated by misinformation is important, but it should not be taken as proof of its predominance. Big numbers are the rule on the internet, and to be properly understood, they must be contextualized within the entire information ecosystem. That being said, it is not always possible for researchers to consider the big picture since access to social media data is restricted and often partial. As a result, many research questions remain unexplored and conclusions are sometimes uncertain, even when private companies provide access to huge amounts of data. For instance, despite its unprecedented size, the URLs dataset released by Facebook via Social Science One's program only includes URLs that have been publicly shared at least 100 times. This threshold of 100 public shares overestimates the prevalence of misinformation on Facebook by a factor of four (Allen et al., 2021).
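The direction of this censoring bias can be illustrated with a toy simulation. In the sketch below, share counts are drawn from heavy-tailed distributions in which unreliable URLs happen to be concentrated among heavily shared links; all distributions and parameters are invented for illustration and are not taken from Allen et al. (2021). The point is the direction of the bias, not its magnitude:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of censored platform data. Share counts follow heavy-tailed
# (lognormal) distributions; the parameters below are illustrative
# assumptions only, not estimates from Allen et al. (2021).
reliable_shares = rng.lognormal(mean=1.0, sigma=2.0, size=1_000_000)
unreliable_shares = rng.lognormal(mean=2.0, sigma=2.0, size=10_000)

def prevalence(rel, unrel):
    """Fraction of URLs in the data that point to unreliable sources."""
    return len(unrel) / (len(rel) + len(unrel))

full = prevalence(reliable_shares, unreliable_shares)
# Keep only URLs shared at least 100 times, mimicking the released dataset.
censored = prevalence(reliable_shares[reliable_shares >= 100],
                      unreliable_shares[unreliable_shares >= 100])

print(f"prevalence in full data:     {full:.2%}")
print(f"prevalence in censored data: {censored:.2%}")  # inflated
```

Truncating the long tail removes proportionally more of the lightly shared reliable URLs, so the unreliable share of what remains is inflated.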
Despite our limited access to social media data, methodological improvements are possible. For example, even with partial data access, it is possible to conduct cross-platform studies, which are needed to obtain accurate estimates of misinformation prevalence and to quantify the diversity of people's news diets (Cointet et al., 2021).

Falsehoods Spread Faster Than the Truth

The idea that falsehoods spread faster than the truth was amplified and legitimized by an influential article published in Science, which found, among other things, that "it took the truth about six times as long as falsehood to reach 1,500 people" (Vosoughi et al., 2018, p. 3). The study was quickly covered in prominent news outlets and led to alarmist headlines such as "Want something to go viral? Make it fake news" (Fox, 2018). Yet, Vosoughi and his colleagues did not examine the spread of true and false news online in general, but of "contested news" that fact-checkers classified as either true or false, leaving aside a large amount of uncontested news that is extremely viral. For instance, the royal wedding of Prince Harry and Meghan Markle attracted 29 million viewers in the United States and generated at least 6.9 million interactions on Facebook (Berger, 2018). Similarly, the picture of Lionel Messi holding the 2022 World Cup trophy generated 75 million likes on Instagram (and more than 2 million comments), beating the record previously held by the photo of an egg with 59 million likes. The authors themselves publicly explained this sampling bias (see Note 1). Yet, journalists and scientists quickly overgeneralized, and nuances were lost. Subsequent studies found conflicting results, with science-based evidence and fact-checking being more retweeted than false information during the COVID-19 pandemic (Pulido et al., 2020), and news from hyperpartisan sites not disseminating more quickly than mainstream news on Australian Twitter (Bruns & Keller, 2020). Moreover, a cross-platform study found no difference in the spreading patterns of news from reliable and questionable sources (Cinelli et al., 2020).

This is not to say that the Vosoughi et al. findings on fact-checked news do not hold, but rather that how information and misinformation are defined greatly influences the conclusions we draw. Misinterpretations of that article raise the following questions: What categories should we use? Do dichotomous measures of veracity make sense? How should we define misinformation? In recent years, important efforts have been made to refine the blurred concept of "fake news" (for a review, see Tandoc et al., 2018). In practice, though, researchers use the ready-made categories constructed by fact-checkers, putting under the same heading content that differs in terms of harmfulness or facticity. How (mis)information is defined influences the perceived scale of the problem and the solutions to fight it (Rogers, 2020). For instance, in 2016, a widely circulated BuzzFeed article showed that fake news outperformed true news on Facebook in terms of engagement before the U.S. presidential election (Silverman, 2016). However, when hyperpartisan news (such as Fox News) was excluded from the fake news category, reliable news largely outperformed fake news (Rogers, 2020). Politically biased information that is not false could have harmful effects (Longhi, 2021), but does it belong in the misinformation category?
As Rogers (2021) explains, "stricter definitions of misinformation (imposter sites, pseudo-science, conspiracy, extremism only) lessen the scale of the problem, while roomier ones (adding 'hyperpartisan' and 'junk' sites serving clickbait) increase it, albeit rarely to the point where it outperforms non-problematic (or more colloquially, mainstream) media" (p. 2). Of course, it would be impossible to find a unanimous typology that would put every piece of news in the "right" category, but it is important to be mindful of the limitations of the categories we use.

In the first part of this article, we have highlighted three misconceptions about the prevalence and circulation of misinformation. We encourage future research to broaden the scope of misinformation studies by (1) putting misinformation consumption into perspective with people's broader information practices, as well as (2) all their online activities, and (3) moving away from blatantly false information to study subtler forms of information disorders (see Figure 1: contextualizing misinformation prevalence and circulation; from the innermost circle outward: fact-checked false news; broader forms of misinformation, including, e.g., partisan interpretations; all online information; all information, including offline).

Misconceptions About the Impact and Reception of Misinformation

In the second part of this article, we examine misconceptions about the impact and reception of misinformation. It is not because the numbers generated by misinformation are small compared to reliable information or non-news content that we can dismiss them. Small numbers can make a difference at a societal level, but to accurately assess their impact and how they translate into attitudes and behaviors, we need to understand what they mean. Until now, studies on misinformation have been dominated by experimental approaches, self-report surveys, and big data methods that reduce the reception of misinformation to mere stimulus-and-response mechanisms, failing to capture audience agency and context. Drawing on an active audience approach, the following sections explain why certain findings from quantitative studies can lead to misconceptions about the impact and reception of misinformation (see Figure 2: contextualizing misinformation effect and reception).

People Believe Everything They See on the Internet

Media effects are increasingly measured via big data methods (Athique, 2018, p. 59), neglecting people's agency and paying little attention to communicative context (Livingstone, 2019). As danah boyd and Kate Crawford (2012, p. 666) put it, "why people do things, write things, or make things can be lost in the sheer volume of numbers." For instance, it is tempting to conflate prevalence with impact, but the diffusion of inaccurate information should be distinguished from its reception. People are more likely to share true and false news they perceive as more accurate (e.g., Altay et al., 2021), but there is also a well-documented disconnect between sharing intentions and accuracy judgments (Altay et al., 2021). Sharing and liking are not believing. People interact with misinformation for a variety of reasons: to socialize, to express skepticism, outrage, or anger, to signal group membership, or simply to have a good laugh (Acerbi, 2019; Berriche & Altay, 2020; Metzger et al., 2021; Osmundsen et al., 2021). Engagement metrics are an imperfect proxy of information reception and are not sufficient to estimate the impact of misinformation. As Wagner and Boczkowski (2019, p. 881) note,
"studies that measure exposure to fake news seem sometimes to easily infer acceptance of content due to mere consumption of it, therefore perhaps exaggerating potential negative effects [. . .] future studies should methodologically account for critical or ironic news sharing, as well as other forms of negotiation and resignification of false content."

With the datafication of society, scientists' attention has shifted from empirical audiences to the digital traces that they leave. However, we should not forget the people who left these traces in the digital record, nor neglect the rich theoretical framework produced by decades of reception studies based on in situ observations of people consuming media content (Lull, 1988; Morley, 1980). People are not passive receptacles of information. They are active and interpretative, and they domesticate technologies in complex and unexpected ways (Livingstone, 2019). People are more skeptical than gullible when browsing online (Fletcher & Nielsen, 2019). Trust in the media is low, but trust in information encountered on social media is even lower. And people deploy a variety of strategies to detect and counter misinformation, such as checking different sources or turning to fact-checks (Wagner & Boczkowski, 2019). People often disqualified as "irrational" or "gullible" in public discourse, such as "conspiracy theorists" or "anti-vaxxers," deploy sophisticated verification strategies to "fact-check" the news in their own way (Tripodi, 2018) and produce "objectivist counter-expertise" (Ylä-Anttila, 2018) by doing their "own research" (Marwick & Partin, 2022). For instance, a recent in-depth, mixed-methods analysis of around 15,000 comments showed how "anti-vaxxers" cite scientific studies in Facebook groups to support their positions and challenge the objectivity of mainstream media (Berriche, 2021). Given people's skepticism toward information encountered online and the low prevalence of misinformation in their media diet, interventions aimed at reducing the acceptance of misinformation are bound to have smaller effects than interventions increasing trust in reliable sources of information (Acerbi et al., 2022). More broadly, enhancing trust in reliable sources should be a priority over fostering distrust in unreliable sources (Altay, 2022).

A Large Number of People Are Misinformed

Headlines about the ubiquity of misbeliefs are rampant in the media and are most often based on surveys. But how well do surveys measure misbeliefs? Luskin and colleagues (2018) analyzed the design of 180 media surveys with closed-ended questions measuring belief in misinformation. They found that more than 90% of these surveys lacked an explicit "Don't know" or "Not sure" option and used formulations encouraging guessing, such as "As far as you know . . ." or "Would you say that . . ." Often, participants answer these questions by guessing the correct answer and report holding beliefs that they did not hold before the survey (Graham, 2021). Not providing, or not encouraging, "Don't know" answers is known to increase guessing even more (Luskin & Bullock, 2011). Guessing would not be a major issue if it only added noise to the data. To find out, Luskin and colleagues (2018) tested the impact of not providing "Don't know" answers and encouraging guessing on the prevalence of misbeliefs.
They found that it overestimates the proportion of incorrect answers by nine percentage points (25% vs. 16%) and, when counting as misinformed only people who report being confident in holding a misperception, by 20 percentage points (25% vs. 5%). In short, survey items measuring misinformation overestimate the extent to which people are misinformed, eclipsing the share of those who are simply uninformed.

In the same vein, conspiratorial beliefs are notoriously difficult to measure, and surveys tend to exaggerate their prevalence (Clifford et al., 2019). For instance, participants in survey experiments display a preference for positive response options (yes vs. no, or agree vs. disagree), which inflates agreement with statements, including conspiracy theories, by up to 50% (Hill & Roberts, 2021; Krosnick, 2018). Moreover, the absence of "Don't know" options, together with the impossibility of expressing a preference for conventional explanations over conspiratorial explanations, greatly overestimates the prevalence of conspiratorial beliefs (Clifford et al., 2019). These methodological problems contributed to unsupported alarmist narratives about the prevalence of conspiracy theories, such as QAnon going mainstream (Uscinski et al., 2022a). Moreover, the misperceptions that surveys measure are skewed toward politically controversial and polarizing misperceptions, which are not representative of the misperceptions that people actually hold (Nyhan, 2020). This could contribute to fueling affective polarization, by emphasizing differences between groups instead of similarities, and inflate the prevalence of misbeliefs.

When misperceptions become group markers, participants use them to signal group membership, whether they truly believe the misperceptions or not (Bullock et al., 2013). Responses to factual questions in survey experiments are known to be vulnerable to "partisan cheerleading" (Bullock et al., 2013; Prior et al., 2015), in which, instead of stating their true beliefs, participants give politically congenial responses. Quite famously, a large share of Trump supporters reported that Donald Trump's inauguration in 2017 was more crowded than Barack Obama's in 2009, despite being presented with visual evidence to the contrary. Partisanship does not directly influence people's perceptions: misperceptions about the size of the crowds were largely driven by expressive responding and guessing. Respondents who supported President Trump "intentionally provide misinformation" to reaffirm their partisan identity (Schaffner & Luks, 2018, p. 136). The extent to which expressive responding contributes to the overestimation of other political misbeliefs is debated (Nyhan, 2020), but it is probably significant.

Solutions have been proposed to overcome these flaws and measure misbeliefs more accurately, such as including confidence-in-knowledge measures (Graham, 2020) and counting as misinformed only participants who firmly and confidently say they believe misinformation items (Luskin et al., 2018). Yet, even when people report confidently holding misbeliefs, these misbeliefs are highly unstable across time, much more so than accurate beliefs (Graham, 2021). For instance, the responses of people saying they are 100% certain that climate change is not occurring have the same measurement properties as the responses of people saying they are 100% certain that the continents are not moving or that the sun goes around the Earth (Graham, 2021). A participant's response at time T does not predict their answer at time T + 1. In other words, flipping a coin would give a similar response pattern.
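A toy calculation makes the guessing mechanism described above concrete. The population shares below are invented for illustration, chosen only to echo the 25-versus-5 contrast reported by Luskin and colleagues (2018); they are not estimates from that study:

```python
# Toy model of how forced-choice survey items inflate measured misperceptions.
# All shares are illustrative assumptions, not empirical estimates.
truly_misinformed = 0.05  # confidently hold the misperception
uninformed = 0.40         # hold no belief either way
# The remaining 55% are correctly informed.

# Without a "Don't know" option, uninformed respondents must guess;
# on a binary item, half of them land on the incorrect answer.
guess_wrong_rate = 0.5
measured_misinformed = truly_misinformed + uninformed * guess_wrong_rate

print(f"truly misinformed:       {truly_misinformed:.0%}")     # 5%
print(f"measured as misinformed: {measured_misinformed:.0%}")  # 25%
```

Adding an explicit "Don't know" option moves the guessers out of the "misinformed" column, which is why question design matters so much here.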
So far, we have seen that even well-designed surveys overestimate the prevalence of misbeliefs. A further issue is that surveys unreliably measure exposure to misinformation and the occurrence of rare events such as fake news exposure. People report being exposed to a substantial amount of misinformation and recall having been exposed to particular fake headlines (Allcott & Gentzkow, 2017). To estimate the reliability of these measures, Allcott and Gentzkow (2017) showed participants the 14 most popular fake news stories of the American election campaign, together with 14 made-up "placebo fake news" stories. 15% of participants declared having been exposed to one of the 14 "real fake news" stories, but 14% also declared having been exposed to one of the 14 "fake news placebos."
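Placebo items of this kind give researchers a baseline rate of false recall that can be subtracted from reported exposure. Below is a minimal sketch of that subtraction logic, using the figures above; it is not Allcott and Gentzkow's full statistical procedure:

```python
# Placebo calibration of self-reported exposure, in the spirit of
# Allcott & Gentzkow (2017). A sketch of the subtraction logic only.
reported_real = 0.15     # recall seeing at least one actual fake headline
reported_placebo = 0.14  # recall seeing a made-up "placebo" headline

# Placebo headlines never circulated, so reports of seeing them estimate
# the baseline rate of false recall, which is netted out.
net_exposure = reported_real - reported_placebo
print(f"exposure net of false recall: {net_exposure:.0%}")  # ~1%
```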
Second, even when misinformation changes people’s minds and leads to the formation of new (mis)beliefs, it is not clear if these (mis)beliefs ever translate into behaviors. Attitudes are only weak predictors of behaviors. This problem is well known in public policies as the value-action gap (Kollmuss & Agyeman, 2002). Most notoriously, people report being increasingly concerned about the environment without adjusting their behaviors accordingly (Landry et al., 2018). Common misbeliefs, such as conspiracy theories, are likely to be cognitively held in such a way that limits their influence on behaviors (Mercier, 2020; Mercier & Altay, 2022). For instance, the behavioral consequences that follow from common misbeliefs are often at odds with what we would expect from people actually believing them. As Jonathan Kay (2011, p. 185) noted, “one of the great ironies of the Truth movement is that its activists typically hold their meetings in large, unsecured locations such as college auditoriums—even as they insist that government agents will stop at nothing to protect their conspiracy for world domination from discovery.” Often, these misbeliefs are likely to be post hoc rationalizations of pre-existing attitudes, such as distrust of institutions. In the real world, it is difficult to measure how much attitude change misinformation causes, and it is a daunting task to assess its impact on people’s behavior. Surveys relying on correlational data tell us little about causation. For example, belief in conspiracy theories is associated with many costly behaviors, such as COVID-19 vaccine refusal (Uscinski et al., 2022b). Does this mean that vaccine hesitancy is caused by conspiracy theories? No, it could be that both vaccine hesitancy and belief in conspiracy theories are caused by other factors, such as low trust in institutions (Mercier & Altay, 2022; Uscinski et al., 2022b). A few ingenious studies allowed some causal inferences to be drawn. For instance, Kim and Kim (2019) used a longitudinal survey to capture people’s beliefs and behaviors both before and after the diffusion of the “Obama is a Muslim” rumor. They found that after the rumor spread, more people were likely to believe that Obama was a Muslim. Yet, this effect was “driven almost entirely by those predisposed to dislike Obama” (p. 307), and the diffusion of the rumor had no measurable effect on people’s intention to vote for Obama. This should not come as a surprise, considering that even political campaigns and political advertising only have weak and indirect effects on voters (Kalla & Broockman, 2018). As David Karpf (2019) writes “Generating social media interactions is easy; mobilizing activists and persuading voters is hard.” The idea that exposure to misinformation (or information) has a strong and direct influence on people’s attitudes and behaviors comes from a misleading analogy of social influence according to which ideas infect human minds like viruses infect human bodies. Americans did not vote for Trump in 2016 because they were brainwashed. There is no such thing as “brainwashing” (Mercier, 2020). Information is not passed from brain to brain like a virus is passed from body to body. When humans communicate, they constantly reinterpret the messages they receive, and modify the ones they send (Claidière et al., 2014). 
The same tweet will create very different mental representations in each brain that reads it, and the public representations people leave behind them, in the form of digital traces, are only an imperfect proxy of their private mental representations. The virus metaphor, all too popular during the COVID-19 pandemic (think of the "infodemic" epithet), is misleading (Simon & Camargo, 2021). It is reminiscent of outdated models of communication (e.g., the "hypodermic needle model") assuming that audiences were passive and easily swayed by pretty much everything they heard or read (Lasswell, 1927). As Anderson (2021) notes, "we might see the role of Facebook and other social media platforms as returning us to a pre-Katz and Lazarsfeld era, with fears that Facebook is 'radicalizing the world' and that Russian bots are injecting disinformation directly in the bloodstream of the polity." These premises are at odds with what we know about human psychology and clash with decades of data from communication studies.

Conclusion

We identified six misconceptions about the prevalence and impact of misinformation and examined the conceptual and methodological challenges they raise. First, social media makes the perfect villain; but before blaming it for the misinformation problem, more work needs to be done on legacy media and offline networks. Second, the misinformation problem should be evaluated at the scale of the information ecosystem, for example, by including news consumption and news avoidance in the equation. Third, we should be mindful of the categories that we use, like "fake" versus "true" news, since they influence our results and their practical implications. Fourth, more qualitative and quantitative reception studies are needed to understand people's informational practices. Digital traces do not always mean what we expect them to, and often, to fully understand them, fine-grained analyses are needed. Fifth, the number of misinformed people is likely to be overestimated. Surveys measuring misbeliefs should include "Don't know" or "Not sure" options and avoid wording that encourages guessing. Sixth, we should resist monocausal explanations and avoid blaming misinformation for complex socioeconomic problems. People mostly consume information they are predisposed to accept; this acceptance should not be conflated with attitude or behavioral change. More broadly, conclusions drawn from engagement metrics, online experiments, or surveys need to be taken with a grain of salt, as they tend to overestimate the prevalence of misbeliefs and tell us little about the causal influence of misinformation and its reception. In the paragraphs below, we detail practical avenues for future research to overcome some of these challenges.

We need to move away from fake news and blatantly false information, as their prevalence and effects are likely minimal. Instead, we should investigate subtler forms of misinformation that produce biased perceptions of reality. For instance, partisan media use true information in misleading ways by selectively reporting on some issues but not others and by framing them in a politically congruent manner (Broockman & Kalla, 2022).

We need to better interpret data collected via computational methods or self-report surveys. So far, misinformation studies have been dominated by big data methods and experimental approaches. This research could benefit from being complemented by active audience research (Livingstone, 2019).
For instance, instead of focusing on media effects, where participants are often implicitly depicted as passive, it would be more fruitful to study the different ways people use the information they consume online. Similarly, digital ethnographies are needed to understand the reception of misinformation in our complex digital ecosystem.

We need to rely on more complex models of influence. Metaphors like "contagion" and "infodemic" provide an overly simplified, and incorrect, view of how influence works (Simon & Camargo, 2021). People do not change their minds when exposed to "fake news" on social media. Social media does not turn people into conspiracy theorists. Individual predispositions, such as conspiracy mentality, mediate the relationship between social media use and belief in conspiracy theories (Uscinski et al., 2022b). These mediating variables, which explain, for instance, why some people but not others visit untrustworthy websites, should be investigated further.

We need to ask better research questions. The misconceptions outlined above have (mis)guided some research agendas by focusing excessively on why people "fall for fake news" or on how to make people more skeptical. Yet, most people do not fall for fake news and are (overly) skeptical of news on social media. This excessive focus on biases, common in psychology, has left some pressing questions unanswered. For example, we know very little about the cognitive and social tools that people use to verify information online. Future studies should move from a perspective centered on the identification of isolated failures of individual cognition to an approach attentive to audiences' critical skills. Our digital environment has empowered us with resources such as fact-checking websites, search engines, and Wikipedia, making it easier than ever before to verify content encountered online and offline. Yet, research on the uses (and misuses) of these new information search and verification tools is drastically lacking.

Finally, the study of misconceptions does not entail a focus on misinformation. People commonly hold false beliefs and engage in detrimental behaviors because they do not trust institutions and choose not to inform themselves. Instead of focusing on the small number of people who consume news from unreliable sources, it would be more fruitful to focus on the large share of people who are overly skeptical of reliable sources and rarely consume any news (Allen et al., 2020; Cordonier & Brest, 2021). Despite legitimate concerns about misinformation, people are more likely to be uninformed than misinformed (Li & Wagner, 2020), even during the COVID-19 pandemic (Cushion et al., 2020).

To appropriately understand and fight misinformation, it is crucial to keep these conceptual and methodological blind spots in mind. Just like misinformation, misinformation on misinformation can have deleterious effects (Altay et al., 2020; Jungherr & Schroeder, 2021; Miró-Llinares & Aguerri, 2021; Van Duyn & Collier, 2019), such as diverting society's attention and resources from deeper socioeconomic issues or further fueling people's mistrust of the media (Altay & Acerbi, 2022). Misinformation is mostly a symptom of deeper sociopolitical problems, rather than a cause of them (Ramaciotti Morales et al., 2022). Fighting the symptoms can help, but it should not divert us from the real causes, nor overshadow the need to fight for access to accurate, transparent, and quality information.
Acknowledgements

We would like to thank Dominique Cardon, Héloïse Théro, and Shaden Shabayek for proofreading our manuscript and for their valuable feedback and corrections.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Project De Facto, Observatory of Information (Connecting Europe Facility-2020-FR-IA-0291), and the Reboot Foundation.

ORCID iD

Manon Berriche https://orcid.org/0000-0003-1381-8330

Note

1. For instance, see Sinan Aral's tweet: https://twitter.com/sinanaral/status/974276513741844480

References

Acerbi, A. (2019). Cognitive attraction and online misinformation. Palgrave Communications, 5(1), 15.
Acerbi, A. (2020). Cultural evolution in the digital age. Oxford University Press.
Acerbi, A., Altay, S., & Mercier, H. (2022). Research note: Fighting misinformation or fighting for information? Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-87
Alexander, S. (2013). Lizardman's constant is 4%. Slate Star Codex.
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211
Allen, J., Howland, B., Mobius, M., Rothschild, D., & Watts, D. J. (2020). Evaluating the fake news problem at the scale of the information ecosystem. Science Advances, 6(14), eaay3539. https://doi.org/10.1126/sciadv.aay3539
Allen, J., Mobius, M., Rothschild, D. M., & Watts, D. J. (2021). Research note: Examining potential bias in large-scale censored data. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-74
Altay, S. (2022). How effective are interventions against misinformation? https://doi.org/10.31234/osf.io/sm3vk
Altay, S., & Acerbi, A. (2022). Misinformation is a threat because (other) people are gullible. https://doi.org/10.31234/osf.io/n4qrj
Altay, S., de Araujo, E., & Mercier, H. (2021). "If this account is true, it is most enormously wonderful": Interestingness-if-true and the sharing of true and false news. Digital Journalism, 10, 373–394. https://doi.org/10.1080/21670811.2021.1941163
Altay, S., Hacquin, A.-S., & Mercier, H. (2020). Why do so few people share fake news? It hurts their reputation. New Media & Society, 24, 1303–1324. https://doi.org/10.1177/1461444820969893
Altay, S., Nielsen, R. K., & Fletcher, R. (2022). Quantifying the "infodemic": People turned to trustworthy news outlets during the 2020 coronavirus pandemic. Journal of Quantitative Description: Digital Media, 2. https://reutersinstitute.politics.ox.ac.uk/quantifying-infodemic-people-turned-trustworthy-news-outlets-during-2020-coronavirus-pandemic
Anderson, C. W. (2021). Fake news is not a virus: On platforms and their effects. Communication Theory, 31(1), 42–61. https://doi.org/10.1093/ct/qtaa008
Athique, A. (2018). The dynamics and potentials of big data for audience research. Media, Culture & Society, 40(1), 59–74. https://doi.org/10.1177/0163443717693681
Auxier, B. (2020). 64% of Americans say social media have a mostly negative effect on the way things are going in the U.S. today. Pew Research Center. https://www.pewresearch.org/fact-tank/2020/10/15/64-of-americans-say-social-media-have-a-mostly-negative-effect-on-the-way-things-are-going-in-the-u-s-today/
Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press.
Bennett, W. L., & Livingston, S. (Eds.). (2020). The disinformation age: Politics, technology, and disruptive communication in the United States. Cambridge University Press. https://doi.org/10.1017/9781108914628
Berger, S. (2018). Here's how many Americans watched Prince Harry and Meghan Markle's royal wedding. CNBC. https://www.cnbc.com/2018/05/21/ratings-for-prince-harry-and-meghan-markle-royal-wedding.html
Berriche, M. (2021). In search of sources. Politiques de Communication, 1, 115–154. https://www.cairn.info/revue-politiques-de-communication-2021-1-page-115.htm
Berriche, M., & Altay, S. (2020). Internet users engage more with phatic posts than with health misinformation on Facebook. Palgrave Communications, 6, 71. https://doi.org/10.1057/s41599-020-0452-1
Bonhomme, J. (2016). The sex thieves: The anthropology of a rumor. Média Diffusion.
Boyd, D., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.
Broockman, D., & Kalla, J. (2022). The manifold effects of partisan media on viewers' beliefs and attitudes: A field experiment with Fox News viewers. OSF Preprints. https://osf.io/jrw26/
Bruns, A., & Keller, T. (2020). News diffusion on Twitter: Comparing the dissemination careers for mainstream and marginal news. Social Media & Society. https://snurb.info/node/2601
Bullock, J. G., Gerber, A. S., Hill, S. J., & Huber, G. A. (2013). Partisan bias in factual beliefs about politics. National Bureau of Economic Research. http://www.nber.org/papers/w19080
BuzzFeed. (2016). Here are 50 of the biggest fake news hits on Facebook from 2016. https://www.buzzfeednews.com/article/craigsilverman/top-fake-news-of-2016
Cantril, H., Gaudet, H., & Herzog, H. (1940). The invasion from Mars. Princeton University Press.
Carlson, M. (2020). Fake news as an informational moral panic: The symbolic deviancy of social media during the 2016 US presidential election. Information, Communication & Society, 23(3), 374–388.
Cinelli, M., Quattrociocchi, W., Galeazzi, A., Valensise, C. M., Brugnoli, E., Schmidt, A. L., Zola, P., Zollo, F., & Scala, A. (2020). The COVID-19 social media infodemic. Scientific Reports, 10(1), 1–10.
Claidière, N., Scott-Phillips, T. C., & Sperber, D. (2014). How Darwinian is cultural evolution? Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1642), 20130368. http://rstb.royalsocietypublishing.org/content/369/1642/20130368.short
Clifford, S., Kim, Y., & Sullivan, B. W. (2019). An improved question format for measuring conspiracy beliefs. Public Opinion Quarterly, 83(4), 690–722.
Cohen, S. (1972). Folk devils and moral panics. Routledge. https://doi.org/10.4324/9780203828250
Cointet, J.-P., Cardon, D., Mogoutov, A., Ooghe-Tabanou, B., Plique, G., & Morales, P. (2021). Uncovering the structure of the French media ecosystem. http://arxiv.org/abs/2107.12073
Cordonier, L., & Brest, A. (2021). How do the French inform themselves on the Internet? Analysis of online information and disinformation behaviors. Fondation Descartes. https://hal.archives-ouvertes.fr/hal-03167734/document
Cushion, S., Soo, N., Kyriakidou, M., & Morani, M. (2020). Research suggests UK public can spot fake news about COVID-19, but don't realise the UK's death toll is far higher than in many other countries. LSE COVID-19 Blog. https://blogs.lse.ac.uk/covid19/2020/04/28/research-suggests-uk-public-can-spot-fake-news-about-covid-19-but-dont-realise-the-uks-death-toll-is-far-higher-than-in-many-other-countries/
Fletcher, R., Cornia, A., Graves, L., & Nielsen, R. K. (2018). Measuring the reach of "fake news" and online disinformation in Europe. Reuters Institute Factsheet. https://reutersinstitute.politics.ox.ac.uk/our-research/measuring-reach-fake-news-and-online-disinformation-europe
Fletcher, R., & Nielsen, R. K. (2019). Generalised scepticism: How people navigate news on social media. Information, Communication & Society, 22(12), 1751–1769.
Fox, M. (2018). Fake news: Lies spread faster on social media than truth does. NBC News. https://www.nbcnews.com/health/health-news/fake-news-lies-spread-faster-social-media-truth-does-n854896
France info. (2020). Désintox. Non, la désinformation n'a pas fait 800 morts [Detox: No, disinformation did not kill 800 people]. France Info. https://www.francetvinfo.fr/sante/maladie/coronavirus/desintox-non-la-desinformation-n-a-pas-fait-800-morts_4109117.html
Gharpure, R., Hunter, C. M., Schnall, A. H., Barrett, C. E., Kirby, A. E., Kunz, J., Berling, K., Mercante, J. W., Murphy, J. L., & Garcia-Williams, A. G. (2020). Knowledge and practices regarding safe household cleaning and disinfection for COVID-19 prevention—United States, May 2020. Morbidity and Mortality Weekly Report, 69(23), 705–709.
Graham, M. H. (2020). Self-awareness of political knowledge. Political Behavior, 42(1), 305–326.
Graham, M. H. (2021). Measuring misperceptions? American Political Science Review.
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 US presidential election. Science, 363(6425), 374–378. https://doi.org/10.1126/science.aau2706
Guess, A., Aslett, K., Tucker, J., Bonneau, R., & Nagler, J. (2021). Cracking open the news feed: Exploring what US Facebook users see and share with large-scale platform data. Journal of Quantitative Description: Digital Media, 1. https://doi.org/10.51685/jqd.2021.006
Guess, A., Lockett, D., Lyons, B., Montgomery, J. M., Nyhan, B., & Reifler, J. (2020). "Fake news" may have limited effects beyond increasing beliefs in false claims. Harvard Kennedy School Misinformation Review, 1(1). https://doi.org/10.37016/mr-2020-004
Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1), eaau4586. https://doi.org/10.1126/sciadv.aau4586
Hill, S. J., & Roberts, M. E. (2021). Acquiescence bias inflates estimates of conspiratorial beliefs and political misperceptions. http://www.margaretroberts.net/wp-content/uploads/2021/10/hillroberts_acqbiaspoliticalbeliefs.pdf
Islam, M. S., Sarkar, T., Khan, S. H., Kamal, A.-H. M., Hasan, S. M., Kabir, A., Yeasmin, D., Islam, M. A., Chowdhury, K. I. A., & Anwar, K. S. (2020). COVID-19-related infodemic and its impact on public health: A global social media analysis. The American Journal of Tropical Medicine and Hygiene, 103(4), 1621.
Jungherr, A., & Schroeder, R. (2021). Disinformation and the structural transformations of the public arena: Addressing the actual challenges to democracy. Social Media + Society, 7(1), 2056305121988928.
Kalla, J. L., & Broockman, D. E. (2018). The minimal persuasive effects of campaign contact in general elections: Evidence from 49 field experiments. American Political Science Review, 112(1), 148–166. https://doi.org/10.1017/S0003055417000363
Karpf, D. (2019). On digital disinformation and democratic myths. MediaWell, Social Science Research Council.
Kay, J. (2011). Among the Truthers: A journey through America's growing conspiracist underground. Harper Collins.
Kim, J. W., & Kim, E. (2019). Identifying the effect of political rumor diffusion using variations in survey timing. Quarterly Journal of Political Science, 14(3), 293–311. http://dx.doi.org/10.1561/100.00017138
Kitchin, R. (2014). Big Data, new epistemologies and paradigm shifts. Big Data & Society, 1(1), 2053951714528481. https://doi.org/10.1177/2053951714528481
Kollmuss, A., & Agyeman, J. (2002). Mind the gap: Why do people act environmentally and what are the barriers to pro-environmental behavior? Environmental Education Research, 8(3), 239–260.
Krosnick, J. A. (2018). Questionnaire design. In The Palgrave handbook of survey research (pp. 439–455). Palgrave Macmillan.
Landry, N., Gifford, R., Milfont, T. L., Weeks, A., & Arnocky, S. (2018). Learned helplessness moderates the relationship between environmental concern and behavior. Journal of Environmental Psychology, 55, 18–22.
Lasswell, H. D. (1927). The theory of political propaganda. The American Political Science Review, 21(3), 627–631. https://doi.org/10.2307/1945515
Li, J., & Wagner, M. W. (2020). The value of not knowing: Partisan cue-taking and belief updating of the uninformed, the ambiguous, and the misinformed. Journal of Communication, 70(5), 646–669.
Litman, L., Rosen, Z., Ronsezweig, C., Weinberger, S. L., Moss, A. J., & Robinson, J. (2020). Did people really drink bleach to prevent COVID-19? A tale of problematic respondents and a guide for measuring rare events in survey data. MedRxiv. https://doi.org/10.1101/2020.12.11.20246694
Livingstone, S. (2019). Audiences in an age of datafication: Critical questions for media research. Television & New Media. https://journals.sagepub.com/doi/full/10.1177/1527476418811118
Longhi, J. (2021). Mapping information and identifying disinformation based on digital humanities methods: From accuracy to plasticity. Digital Scholarship in the Humanities, 36, 980–998. https://doi.org/10.1093/llc/fqab005
Lull, J. (1988). World families watch television. SAGE.
Luskin, R. C., & Bullock, J. G. (2011). "Don't know" means "don't know": DK responses and the public's level of political knowledge. The Journal of Politics, 73(2), 547–557.
Luskin, R. C., Sood, G., Park, Y. M., & Blank, J. (2018). Misinformation about misinformation? Of headlines and survey design. Unpublished manuscript. http://www.gsood.com/research/papers/misinformation_misinformation.pdf
Majid, A. (2021). Covid-19 and the rise of misinformation and misunderstanding. Press Gazette. https://www.pressgazette.co.uk/covid-19-rise-in-news-misinformation-data/
Marwick, A. E. (2008). To catch a predator? The MySpace moral panic. First Monday, 13(6). https://doi.org/10.5210/fm.v13i6.2152
Marwick, A. E., & Partin, W. C. (2022). Constructing alternative facts: Populist expertise and the QAnon conspiracy. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448221090201
McClain, C. (2021). 70% of U.S. social media users never or rarely post or share about political, social issues. Pew Research Center. https://www.pewresearch.org/fact-tank/2021/05/04/70-of-u-s-social-media-users-never-or-rarely-post-or-share-about-political-social-issues/
McClain, C. (2021). 70% of U.S. social media users never or rarely post or share about political, social issues. Pew Research Center. https://www.pewresearch.org/fact-tank/2021/05/04/70-of-u-s-social-media-users-never-or-rarely-post-or-share-about-political-social-issues/
Mellon, J., & Prosser, C. (2017). Twitter and Facebook are not representative of the general population: Political attitudes and demographics of British social media users. Research & Politics, 4(3), 2053168017720008. https://doi.org/10.1177/2053168017720008
Mercier, H. (2020). Not born yesterday: The science of who we trust and what we believe. Princeton University Press.
Mercier, H., & Altay, S. (2022). Do cultural misbeliefs cause costly behavior? In J. Musolino, P. Hemmer, & J. Sommer (Eds.), The science of beliefs. Cambridge University Press. https://www.cambridge.org/core/books/abs/cognitive-science-of-belief/do-cultural-misbeliefs-cause-costly-behavior/C09911219EEEF54A3CCBB939EC238D3D
Metzger, M. J., Flanagin, A. J., Mena, P., Jiang, S., & Wilson, C. (2021). From dark to light: The many shades of sharing misinformation online. Media and Communication, 9(1), Article 3409.
Miró-Llinares, F., & Aguerri, J. C. (2021). Misinformation about fake news: A systematic critical review of empirical studies on the phenomenon and its status as a “threat.” European Journal of Criminology. Advance online publication. https://doi.org/10.1177/1477370821994059
Mitchell, A., Gottfried, J., Stocking, G., Walker, M., & Fedeli, S. (2019). Many Americans say made-up news is a critical problem that needs to be fixed. Pew Research Center’s Journalism Project. https://www.journalism.org/2019/06/05/many-americans-say-made-up-news-is-a-critical-problem-that-needs-to-be-fixed/
Mitchelstein, E., Matassi, M., & Boczkowski, P. J. (2020). Minimal effects, maximum panic: Social media and democracy in Latin America. Social Media + Society, 6(4), 2056305120984452. https://doi.org/10.1177/2056305120984452
Morin, E. (1969). The rumor of Orleans. Le Seuil.
Morley, D. (1980). The nationwide audience. British Film Institute.
Newman, N., Fletcher, R., Schulz, A., Andı, S., & Nielsen, R. K. (2020). Reuters Institute digital news report 2020. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2020-06/DNR_2020_FINAL.pdf
Nyhan, B. (2020). Facts and myths about misperceptions. Journal of Economic Perspectives, 34(3), 220–236.
Orben, A. (2020). The Sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143–1157. https://doi.org/10.1177/1745691620919372
Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A., & Petersen, M. B. (2021). Partisan polarization is the primary psychological motivation behind political fake news sharing on Twitter. American Political Science Review, 115, 999–1015. https://doi.org/10.1017/S0003055421000290
Paris Match Belgique. (2020). Misinformation about coronavirus responsible for thousands of deaths, study finds.
Prior, M., Sood, G., & Khanna, K. (2015). You cannot be serious: The impact of accuracy incentives on partisan bias in reports of economic perceptions. Quarterly Journal of Political Science, 10(4), 489–518.
Pulido, C. M., Villarejo-Carballido, B., Redondo-Sama, G., & Gómez, A. (2020). COVID-19 infodemic: More retweets for science-based information on coronavirus than for false information. International Sociology, 35(4), 377–392.
Ramaciotti Morales, P., Berriche, M., & Cointet, J.-P. (2022). The geometry of misinformation: Embedding Twitter networks of users who spread fake news in geometrical opinion spaces. https://hal.archives-ouvertes.fr/hal-03725556
Rogers, R. (2020). The scale of Facebook’s problem depends upon how “fake news” is classified. Harvard Kennedy School Misinformation Review, 1, 1–15.
Rogers, R. (2021). Marginalizing the mainstream: How social media privilege political information. Frontiers in Big Data. Advance online publication. https://doi.org/10.3389/fdata.2021.689036
Schaffner, B. F., & Luks, S. (2018). Misinformation or expressive responding? What an inauguration crowd can tell us about the source of political misinformation in surveys. Public Opinion Quarterly, 82(1), 135–147.
Schwartz, A. B. (2015). Broadcast hysteria: Orson Welles’s War of the Worlds and the art of fake news. Macmillan.
Schwartz, O. (2018, November 12). You thought fake news was bad? Deep fakes are where truth goes to die. The Guardian. https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth
Silverman, C. (2016, November 16). This analysis shows how viral fake election news stories outperformed real news on Facebook. BuzzFeed News. https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook
Simon, F. M., & Camargo, C. (2021). Autopsy of a metaphor: The origins, use, and blind spots of the “infodemic.” New Media & Society. Advance online publication. https://doi.org/10.1177/14614448211031908
Tandoc, E. C., Jr., Lim, Z. W., & Ling, R. (2018). Defining “fake news”: A typology of scholarly definitions. Digital Journalism, 6(2), 137–153.
Tripodi, F. (2018). Searching for alternative facts (p. 64). Data & Society.
Tsfati, Y., Boomgaarden, H., Strömbäck, J., Vliegenthart, R., Damstra, A., & Lindgren, E. (2020). Causes and consequences of mainstream media dissemination of fake news: Literature review and synthesis. Annals of the International Communication Association, 1–17.
Tufekci, Z. (2014). Big questions for social media big data: Representativeness, validity and other methodological pitfalls. Eighth International AAAI Conference on Weblogs and Social Media. https://arxiv.org/abs/1403.7400
Uscinski, J., Enders, A., Klofstad, C., Seelig, M., Drochon, H., Premaratne, K., & Murthi, M. (2022a). Have beliefs in conspiracy theories increased over time? PLoS One, 17(7), e0270429.
Uscinski, J., Enders, A. M., Klofstad, C., & Stoler, J. (2022b). Cause and effect: On the antecedents and consequences of conspiracy theory beliefs. Current Opinion in Psychology, 47, 101364.
Van Duyn, E., & Collier, J. (2019). Priming and fake news: The effects of elite discourse on evaluations of news media. Mass Communication and Society, 22(1), 29–48.
Viner, K. (2016). How technology disrupted the truth. The Guardian. https://www.theguardian.com/media/2016/jul/12/how-technology-disrupted-the-truth
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.
Wagner, M. C., & Boczkowski, P. J. (2019). The reception of fake news: The interpretations and practices that shape the consumption of perceived misinformation. Digital Journalism, 7(7), 870–885.
Wardle, C. (2021). Broadcast news’ role in amplifying Trump’s tweets about election integrity. First Draft. https://firstdraftnews.org/long-form-article/cable-news-trumps-tweets/
Watts, D. J., & Rothschild, D. M. (2017). Don’t blame the election on fake news. Blame it on the media. Columbia Journalism Review, 5. https://www.cjr.org/analysis/fake-news-media-election-trump.php
World Risk Poll. (2020). “Fake news” is the number one worry for internet users worldwide. https://wrp.lrfoundation.org.uk/wp-content/uploads/2020/08/World-Risk-Poll-Press-Release-Cyber-risk-061020-1.pdf
Ylä-Anttila, T. (2018). Populist knowledge: “Post-truth” repertoires of contesting epistemic authorities. European Journal of Cultural and Political Sociology, 5(4), 356–388. https://doi.org/10.1080/23254823.2017.1414620

Author Biographies

Sacha Altay is a postdoctoral fellow at the Reuters Institute for the Study of Journalism. He holds a PhD in cognitive science, and his work focuses mostly on (mis)information and (mis)trust.

Manon Berriche is a PhD student in sociology at the Sciences Po médialab. Her research focuses on the reception of (mis)information in the digital age and confronts media discourse on “fake news” with the information practices of laypeople, using both quantitative and qualitative methods.

Alberto Acerbi is a researcher in the field of cultural evolution. His research uses evolutionary and quantitative approaches to shed light on contemporary cultural phenomena. More recently, he has become particularly interested in applying cultural evolution theory to study the effects of online digital media.