How Critical Are Critical Reviews? The Box Office Effects of Film Critics, Star Power, and Budgets

Suman Basuroy, Subimal Chatterjee, & S. Abraham Ravid

Journal of Marketing, Vol. 67 (October 2003), 103-117

Suman Basuroy is Assistant Professor of Marketing, University at Buffalo, State University of New York. Subimal Chatterjee is Associate Professor of Marketing, School of Management, Binghamton University. S. Abraham Ravid is Professor of Finance and Economics, Rutgers University and Yale University School of Management. Ravid thanks the New Jersey Center for Research at Rutgers University and the Stern School at New York University for research support. All authors thank Kalpesh Desai, Paul Dholakia, Wagner Kamakura, Matt Clayton, Rob Engle, William Greene, Kose John, and the three anonymous JM reviewers for many helpful suggestions. The authors owe special thanks to Shailendra Gajanan, Subal Kumbhakar, and Nagesh Revankar for many discussions on econometrics.

The authors investigate how critics affect the box office performance of films and how the effects may be moderated by stars and budgets. The authors examine the process through which critics affect box office revenue, that is, whether they influence the decision of the filmgoing public (their role as influencers), merely predict the decision (their role as predictors), or do both. They find that both positive and negative reviews are correlated with weekly box office revenue over an eight-week period, suggesting that critics play a dual role: They can influence and predict box office revenue. However, the authors find the impact of negative reviews (but not positive reviews) to diminish over time, a pattern that is more consistent with critics' role as influencers. The authors then compare the positive impact of good reviews with the negative impact of bad reviews to find that film reviews evidence a negativity bias; that is, negative reviews hurt performance more than positive reviews help performance, but only during the first week of a film's run. Finally, the authors examine two key moderators of critical reviews, stars and budgets, and find that popular stars and big budgets enhance box office revenue for films that receive more negative critical reviews than positive critical reviews but do little for films that receive more positive reviews than negative reviews. Taken together, the findings not only replicate and extend prior research on critical reviews and box office performance but also offer insight into how film studios can strategically manage the review process to enhance box office revenue.

Critics play a significant role in consumers' decisions in many industries (Austin 1983; Cameron 1995; Caves 2000; Einhorn and Koelb 1982; Eliashberg and Shugan 1997; Goh and Ederington 1993; Greco 1997; Holbrook 1999; Vogel 2001; Walker 1995). For example, investors closely follow the opinions of financial analysts before deciding which stocks to buy or sell, as the markets evidenced when an adverse Lehman Brothers report sank Amazon.com's stock price by 19% in one day (BusinessWeek 2000).
Readers often defer to literary reviews before deciding on a book to buy (Caves 2000; Greco 1997); for example, rave reviews of Interpreter of Maladies, a short-story collection by the then relatively unknown Jhumpa Lahiri, made the book a New York Times best-seller (New York Times 1999). Diners routinely refer to reviews in newspapers and dining guides such as the ZagatSurvey to help select restaurants (Shaw 2000). However, the role of critics may be most prominent in the film industry (Eliashberg and Shugan 1997; Holbrook 1999; West and Broniarczyk 1998). More than one-third of Americans actively seek the advice of film critics (The Wall Street Journal 2001), and approximately one of every three filmgoers says they choose films because of favorable reviews. Realizing the importance of reviews to films' box office success, studios often strategically manage the review process by excerpting positive reviews in their advertising and by delaying or forgoing advance screenings if they anticipate bad reviews (The Wall Street Journal 2001). The desire for good reviews can go even further, prompting studios to engage in deceptive practices, as when Sony Pictures Entertainment invented the critic David Manning to pump several films, such as A Knight's Tale and The Animal, in print advertisements (Boston Globe 2001).

In this article, we investigate three issues related to the effects of film critics on box office success. The first issue is critics' role in affecting box office performance. Critics have two potential roles: influencers, if they actively influence the decisions of consumers in the early weeks of a run, and predictors, if they merely predict consumers' decisions. Eliashberg and Shugan (1997), who were the first to define and test these concepts, find that critics correctly predict box office performance but do not influence it. Our results are mixed. On the one hand, we find that both positive and negative reviews are correlated with weekly box office revenue over an eight-week period, thus showing that critics can both influence and predict outcomes. On the other hand, we find that the impact of negative reviews (but not positive reviews) on box office revenue declines over time, a finding that is more consistent with critics' role as influencers.

The second issue we address is whether positive and negative reviews have comparable effects on box office performance. Our interest in such valence effects stems from two reasons; the first is based on studio strategy, and the second is rooted in theory.
First, although we might expect the impact of critical reviews to be strongest in the early weeks of a run and to fall over time as studio buzz from new releases takes over, studios that understand the importance of positive reviews are likely to adopt tactics to leverage good reviews and counter bad reviews (e.g., selectively quote good reviews in advertisements). Intuitively, therefore, we expect the effects of positive reviews to increase over time and the effects of negative reviews to decrease over time. Second, we expect negative reviews to hurt box office performance more than positive reviews help box office performance. This expectation is based on research on negativity bias in impression formation (Skowronski and Carlston 1989) and on loss aversion in scanner-panel data (Hardie, Johnson, and Fader 1993). We find that the negative impact of bad reviews is significantly greater than the positive impact of good reviews on box office revenue, but only in the first week of a film's run (when studios, presumably, have not had time to leverage good reviews and/or counter bad reviews).

The third part of our investigation involves examining how star power and budgets might moderate the impact of critical reviews on box office performance. We chose these two moderators because we believe that examining their effects on box office revenue in conjunction with critical reviews might provide a partial economic rationale for two puzzling decisions in the film industry that have been pointed out in previous works. The first puzzle is why studios are persistent in pursuing famous stars when stars' effects on box office revenue are difficult to demonstrate (De Vany and Walls 1999; Litman and Ahn 1998; Ravid 1999). The second puzzle is why, at a time when big budgets seem to contribute little to returns (John, Ravid, and Sunder 2002; Ravid 1999), the average budget for a Hollywood movie has steadily increased over the years. Our results show that though star power and big budgets seem to do little for films that receive predominantly positive reviews, they are positively correlated with box office performance for films that receive predominantly negative reviews. In other words, star power and big budgets appear to blunt the impact of negative reviews and thus may be sensible investments for the film studios.

In the next section, we explore the current literature and formulate our key hypotheses. We then describe the data and empirical results. Finally, we discuss the managerial implications for marketing theory and practice.

Theory and Hypotheses

Critics: Their Functions and Impact

In recent years, scholars have expressed much interest in understanding critics' role in markets for creative goods, such as films, theater productions, books, and music (Cameron 1995; Caves 2000). Critics can serve many functions. According to Cameron (1995), critics provide advertising and information (e.g., reviews of new films, books, and music provide valuable information), create reputations (e.g., critics often spot rising stars), construct a consumption experience (e.g., reviews are fun to read by themselves), and influence preference (e.g., reviews may validate consumers' self-image or promote consumption based on snob appeal).
In the domain of films, Austin (1983) suggests that critics help the public make a film choice, understand the film content, reinforce previously held opinions of the film, and communicate in social settings (e.g., when consumers have read a review, they can intelligently discuss a film with friends). However, despite a general agreement that critics play a role, it is not clear whether the views of critics necessarily go hand in hand with audience behavior. For example, Austin (1983) argues that film attendance is greater if the public agrees with the critics' evaluations of films than if the two opinions differ. Holbrook (1999) shows that in the case of films, ordinary consumers and professional critics emphasize different criteria when forming their tastes.

Many empirical studies have examined the relationship between critical reviews and box office performance (De Silva 1998; Jedidi, Krider, and Weinberg 1998; Litman 1983; Litman and Ahn 1998; Litman and Kohl 1989; Prag and Casavant 1994; Ravid 1999; Sochay 1994; Wallace, Seigerman, and Holbrook 1993). Litman (1983) finds that each additional star rating (five stars represent a "masterpiece" and one star represents a "poor" film) has a significant, positive impact on the film's theater rentals. Litman and Kohl's (1989) subsequent study and other studies by Litman and Ahn (1998), Wallace, Seigerman, and Holbrook (1993), Sochay (1994), and Prag and Casavant (1994) all find the same impact. However, Ravid (1999) tested the impact of positive reviews on domestic revenue, video revenue, international revenue, and total revenue but did not find any significant effect.

Critics as Influencers or Predictors

Although the previously mentioned studies investigate the impact of critical reviews on a film's performance, they do not describe the process through which critics might affect box office revenue. Eliashberg and Shugan (1997) are the first to propose and test two different roles of critics: influencer and predictor. An influencer, or opinion leader, is a person who is regarded by a group or by other people as having expertise or knowledge on a particular subject (Assael 1984; Weiman 1991). Operationally, if an influencer voices an opinion, people should follow that opinion. Therefore, we expect an influencer to have the most effect in the early stages of a film's run, before word of mouth has a chance to spread. In contrast, a predictor can use either formal techniques (e.g., statistical inference) or informal methods to predict the success or failure of a product correctly. In the case of a film, a predictor is expected to call the entire run (i.e., predict whether the film will do well) or, in the extreme case, correctly predict every week of the film's run.

Ex ante, there are reasons to believe that critics may influence the public's decision of whether to see a film. Critics often are invited to an early screening of the film and then write reviews before the film opens to the public. Therefore, not only do they have more information than the public does in the early stages of a film's run, but they also are the only source of information at that time. For example, Litman (1983) seems to refer to the influencer role in his argument that critical reviews should be important to the popularity of films (1) in the early weeks before word of mouth can take over and (2) if the reviews are favorable.
However, Litman was unable to test this hypothesis directly because his dependent variable is cumulative box office revenue. To better assess causation, Wyatt and Badger (1984) designed experiments using positive, mixed, and negative reviews and found audience interest to be compatible with the direction of the review. However, because their study is based on experiments, they do not use box office returns as the dependent variable.

Inferring critics' roles from weekly correlation data. In our research, we follow Eliashberg and Shugan's (1997) procedure. We study the correlation of both positive and negative reviews with weekly box office revenue. However, even with weekly box office data, we argue that it is not easy to distinguish between critics as influencers and as predictors. We illustrate this point by considering three different examples of correlation between weekly box office revenue and critical reviews.

For the first example, suppose that critical reviews are correlated with the box office revenue of the first few weeks but not with the film's entire run. A case in point is the film Almost Famous, which received excellent reviews (of 47 total reviews reported by Variety, 35 were positive and only 2 were negative) and had a good opening week ($2.4 million on 131 screens, or $18,320 revenue per screen) but ultimately did not do particularly well (grossing only $32 million in about six months). This outcome is consistent with the interpretation that critics influenced the early run but did not correctly predict the entire run. Another interpretation is that critics correctly predicted the early run without necessarily influencing the public's decision but did not predict the film's entire run.

For the second example, suppose that critical reviews are correlated not with a film's box office revenue in the first few weeks but with the box office revenue of the total run. The films Thelma and Louise and Blown Away appear to fit this pattern. Thelma and Louise received excellent reviews and had only moderate first-weekend revenue ($4 million), but it eventually became a hit ($43 million; Eliashberg and Shugan 1997, p. 72). In contrast, Blown Away opened successfully ($10.3 million) despite bad reviews but ultimately did not do well. In the first case, critics correctly forecasted the film's successful run (despite a bad opening); in the second case, critics correctly forecasted the film's unsuccessful run (despite a good opening). In both examples, the performance in the early weeks countered critical reviews. Our interpretation is that critics did not influence the early run but were able to predict the ultimate box office run correctly. Eliashberg and Shugan (1997) find precisely such a pattern (i.e., critical reviews are not correlated with the box office revenue of early weeks but are significantly correlated with the box office revenue of later weeks and with cumulative returns during the run); they conclude that critics are predictors, not influencers.

For the third example, suppose that critical reviews are correlated with weekly box office revenue for the first several weeks (i.e., not just the first week or two) and with the entire run. Consider the films 3000 Miles to Graceland (a box office failure) and The Lord of the Rings: The Fellowship of the Ring (a box office success).
Critics trashed 3000 Miles to Graceland (of 34 reviews, 30 were negative), it had a dismal opening weekend ($7.16 million on 2545 screens, or approximately $3,000 per screen), and it bombed at the box office ($15.74 million earned in slightly more than eight weeks). The Lord of the Rings: The Fellowship of the Ring opened to great reviews (of 20 reviews, 16 were positive and 0 were negative), had a successful opening week ($66.1 million on 3359 screens, or approximately $19,000 per screen), and grossed $313 million. In both cases, critics either influenced the film's opening and correctly predicted its eventual fate or correctly predicted the weekly performance over a longer period and its ultimate fate.

These three examples demonstrate that it is not easy to distinguish critics' different roles (i.e., influencer, predictor, or influencer and predictor) on the basis of weekly box office revenue. Broadly speaking, if critics influence only a film's box office run, we expect them to have the greatest impact on early box office revenue (perhaps in the first week or two). In contrast, if critics predict only a film's ultimate fate, we expect their views to be correlated with the later weeks and the entire run, not necessarily with the early weeks. Finally, if critics influence and predict a film's fate or correctly predict every week of a film's run, we expect reviews to be correlated with the success or failure of the film in the early and later weeks and with the entire run. The following hypotheses summarize the possible links among critics' roles and box office revenue:

H1: If critics are influencers, critical reviews are correlated with box office revenue in the first few weeks only, not with box office revenue in the later weeks or with the entire run.

H2: If critics are predictors, critical reviews are correlated with box office revenue in the later weeks and the entire run, not necessarily with box office revenue in previous weeks.

H3: If critics are both influencers and predictors or play an expanded predictor role, critical reviews are correlated with box office revenue in the early and later weeks and with the entire run.

Inferring critics' roles from the time pattern of weekly correlation. Several scholars have argued that if critics are influencers, they should exert the greatest impact in the first week or two of a film's run because little or no word-of-mouth information is yet available. Thereafter, the impact of reviews should diminish with each passing week as information from other sources becomes available (e.g., people who have already seen the film convey their opinions, more people see the film) and as word of mouth begins to dominate (Eliashberg and Shugan 1997; Litman 1983). However, the issue is not clear-cut: If word of mouth agrees with critics often enough, a decline may be undetectable, and if critics are perfect predictors, no decline can be expected. In other words, if there is a decline in the impact of critical reviews over time, it is consistent with the influencer perspective. Thus:

H4: If critics are influencers, the correlation of critical reviews with box office revenue declines with time.

Valence of Reviews: Negativity Bias

Researchers consistently have found differential impacts of positive and negative information (controlled for magnitude) on consumer behavior.
For example, in the domain of risky choice, Kahneman and Tversky (1979) find that utility or value functions are asymmetric with respect to gains and losses. A loss of $1 provides more dissatisfaction (negative utility) than a gain of $1 provides satisfaction (positive utility), a phenomenon that the authors call "loss aversion." The authors also extend this finding to multiattribute settings (Tversky and Kahneman 1991). A similar finding in the domain of impression formation is the negativity bias, or the tendency of negative information to have a greater impact than positive information (for a review, see Skowronski and Carlston 1989). On the basis of these ideas, we surmise that negative reviews hurt (i.e., negatively affect) box office performance more than positive reviews help (i.e., positively affect) box office performance. Two studies lend further support to this idea. First, Yamaguchi (1978) proposes that consumers tend to accept negative opinions (e.g., a critic's negative review) more easily than they accept positive opinions (e.g., a critic's positive review). Second, recent research suggests that the negativity bias operates in affective processing as early as the initial categorization of information into valence classes (e.g., the film is "good" or "bad"; Ito et al. 1998). Thus, we propose the following:

H5: Negative reviews hurt box office revenue more than positive reviews help box office revenue.

Moderators of Critical Reviews: Stars and Budgets

Are there any factors that moderate the impact of critical reviews on box office performance? We argue that two key candidates are star power and budget. We believe that examining the effects of these two moderators on box office revenue in conjunction with critical reviews may provide a partial economic rationale for the two previously mentioned puzzling film industry decisions about pursuing stars and making big-budget films. In the following paragraphs, we elaborate on this issue by examining the literature on star power and film budgets.

Star power has received considerable attention in the literature (De Silva 1998; De Vany and Walls 1999; Holbrook 1999; Levin, Levin, and Heath 1997; Litman 1983; Litman and Ahn 1998; Litman and Kohl 1989; Neelamegham and Chintagunta 1999; Prag and Casavant 1994; Ravid 1999; Smith and Smith 1986; Sochay 1994; Wallace, Seigerman, and Holbrook 1993). Hollywood seems to favor films with stars (e.g., award-winning actors and directors), and it is almost axiomatic that stars are key to a film's success. However, empirical studies of star power's effect on box office performance have produced conflicting evidence. Litman and Kohl (1989) and Sochay (1994) find that stars' presence in a film's cast has a significant effect on that film's revenue. Similarly, Wallace, Seigerman, and Holbrook (1993, p. 23) conclude that "certain movie stars do make [a] demonstrable difference to the market success of the films in which they appear." In contrast, Litman (1983) finds no significant relationship between a star's presence in a film and box office rentals. Smith and Smith (1986) find that winning an award had a negative effect on a film's fate in the 1960s but a positive effect in the 1970s. Similarly, Prag and Casavant (1994) find that star power positively affects a film's financial success in some samples but not in others.
De Silva (1998) finds that stars are an important factor in the public's attendance decisions but are not significant predictors of financial success, a finding that is documented in subsequent studies as well (De Vany and Walls 1999; Litman and Ahn 1998; Ravid 1999).

Film production budgets also have received significant attention in the literature on motion picture economics (Litman 1983; Litman and Ahn 1998; Litman and Kohl 1989; Prag and Casavant 1994; Ravid 1999).1 In 2000, the average cost of making a feature film was $54.8 million (see Motion Picture Association of America [MPAA] 2002). Big budgets translate into lavish sets and costumes, expensive digital manipulations, and special effects such as those seen in the films Jurassic Park ($63 million budget, released in 1993) and Titanic ($200 million budget, released in 1997). Ravid (1999) and John, Ravid, and Sunder (2002) show that though big budgets are correlated with higher revenue, they are not correlated with returns. If anything, low-budget films appear to have higher returns. What, then, do big budgets do for a film? Litman (1983) argues that big budgets reflect higher quality and greater box office popularity. Similarly, Litman and Ahn (1998, p. 182) suggest that "studios feel safer with big budget films." In this sense, big budgets can serve as an insurance policy (Ravid and Basuroy 2003).

Although the effects of star power and budgets on box office returns may be ambiguous at best, the question remains as to whether these two variables act jointly with critical reviews, as we believe they do, to affect box office performance. For example, suppose that a film receives more positive than negative reviews. If the film starts its run in a positive light, other positive dimensions, such as stars and big budgets, may not enhance its box office success. However, consider a film that receives more negative than positive reviews. In this case, stars and big budgets may help the film by blunting some effects of negative reviews. Levin, Levin, and Heath (1997) suggest that popular stars provide the public with a decision heuristic (e.g., attend the film with the stars) that may be strong enough to blunt any negative critic effect. Conversely, as Levin, Levin, and Heath explain (p. 177), when a film receives more positive than negative reviews, it is "less in need of the additional boost provided by a trusted star." Similarly, Litman and Ahn (1998) suggest that budgets should increase a film's entertainment value and thus its probability of box office success, which consequently compensates for other negative traits, such as bad reviews. On the basis of these arguments, we propose the following:

H6: For films that receive more negative than positive reviews, star power and big budgets positively affect box office performance; however, for films that receive more positive than negative reviews, star power and big budgets do not affect box office performance.

1 In investigating the role of budgets in a film's performance, we need to disentangle the effects of star power from budgets, because it could be argued that expensive stars make the budget a proxy for star power. However, in our data there is extremely low correlation between the measures of star power and budget, suggesting that the two measures are unrelated.
Methodology

Data and Variables

Our data include a random sample of 200 films released between late 1991 and early 1993; most of our data are identified in Ravid's (1999) study. Because of missing data, we pared the sample down to 175 films. We gathered our data from two sources: Baseline in California (http://www.baseline.hollywood.com) and Variety magazine. Although some studies have focused on more successful films, such as the top 50 or the top 100 in Variety lists (De Vany and Walls 1997; Litman and Ahn 1998; Smith and Smith 1986), our study contains a random sample of films (both successes and failures). Our sample contains 156 MPAA-affiliated films and 19 foreign productions, and it covers approximately one-third of all MPAA-affiliated films released between 1991 and 1993 (475 MPAA-affiliated films were released between 1991 and 1993; see Vogel 2001, Table 3.2). In our sample, 3.2% of the films are rated G; 14.7%, PG; 26.3%, PG-13; and 55.7%, R. This distribution closely matches the distribution of all films released between 1991 and 1993 (1.5%, G; 15.8%, PG; 22.1%, PG-13; and 60.7%, R; see Creative Multimedia 1997).

Weekly domestic revenue. Every week, Variety reports the weekly domestic revenue for each film. These figures served as our dependent variables. Most studies cited thus far do not use weekly data (see, e.g., De Vany and Walls 1999; Litman and Ahn 1998; Ravid 1999). Given our focus and our procedure, the use of weekly data is critical.

Valence of reviews. Variety lists reviews for the first weekend in which a film opens in major cities (i.e., New York; Los Angeles; Washington, D.C.; and Chicago). To be consistent with Eliashberg and Shugan's (1997) study, we collected the number of reviews from all these cities. Variety classifies reviews as "pro" (positive), "con" (negative), and "mixed." For the review classification, each reviewer is called and asked how he or she rated a particular film: positive, negative, or mixed. We used these classifications to establish measures of critical review assessment similar to those Eliashberg and Shugan use. Unlike Ravid's (1999) study and consistent with that of Eliashberg and Shugan, our study includes the total number of reviews (TOTNUM) from all four cities. For each film, POSNUM (NEGNUM) is the number of positive (negative) reviews a film received, and POSRATIO (NEGRATIO) is the number of positive (negative) reviews divided by the total number of reviews.

Star power. For star power, we used the proxies that Ravid (1999) and Litman and Ahn (1998) suggest. For each film, Baseline provided a list of the director and up to eight cast members. For our first definition of star, we identified all cast members who had won a Best Actor or Best Actress Academy Award (Oscar) in prior years (i.e., before the release of the film being studied). We created the dummy variable WONAWARD, which denotes films in which at least one actor or the director won an Academy Award in previous years. Based on this measure, 26 of the 175 films in our sample have star power (i.e., WONAWARD = 1). For our second measure, we created the dummy variable TOP10, which has a value of 1 if any member of the cast or the director participated in a top-ten grossing film in previous years (Litman and Ahn 1998).
Based on this measure, 17 of the 175 films in our sample possess star power (i.e., TOP10 = 1). For our third and fourth measures, we collected award nominations for Best Actor, Best Actress, and Best Directing for each film in the sample and defined two variables, NOMAWARD and RECOGNITION. The first variable, NOMAWARD, receives a value of 1 if one of the actors or the director was previously nominated for an award. The NOMAWARD measure increases the number of films with star power to 76 of 175. The second variable, RECOGNITION, measures recognition value. For each of the 76 films in the NOMAWARD category, we summed the total number of awards and the total number of nominations, which effectively creates a weight of 1 for each nomination and doubles the weight of an actual award to 2 (e.g., if an actor was nominated twice for an award, RECOGNITION is 2; if the actor also won an award in one of these cases, the value increases to 3). We thus assigned each of the 76 films a numerical value, which ranged from a maximum of 15 (for Cape Fear, directed by Martin Scorsese and starring Robert De Niro, Nick Nolte, Jessica Lange, and Juliette Lewis) to 0 for films with no nominations (e.g., Curly Sue).

Budgets. Baseline provided the budget (BUDGET) of each film; the trade term for budget is "negative cost," or production costs (Litman and Ahn 1998; Prag and Casavant 1994; Ravid 1999). The budget does not include gross participation (the ex post share of participants in gross revenue), advertising and distribution costs, or guaranteed compensation (a guaranteed amount paid out of revenue if revenue exceeds that amount).

Other control variables. We used several control variables. Each week, Variety reports the number of screens on which a film was shown that week. Eliashberg and Shugan (1997) and Elberse and Eliashberg (2002) find that the number of screens is a significant predictor of box office revenue. Thus, we used SCREEN as a control variable. Another worthwhile variable reflects whether a film is a sequel (Litman and Kohl 1989; Prag and Casavant 1994; Ravid 1999). The SEQUEL variable receives a value of 1 if the movie is a sequel and a value of 0 otherwise. There are 11 sequels in our sample. The industry considers MPAA ratings an important issue (Litman 1983; Litman and Ahn 1998; Ravid 1999; Sochay 1994). In our analysis, we coded ratings using dummy variables; for example, a dummy variable G has a value of 1 if the film is rated G and a value of 0 otherwise. Some films are not rated for various reasons; those films have a value of 0. Finally, our last control variable is release date (RELEASE). In some studies (Litman 1983; Litman and Ahn 1998; Litman and Kohl 1989; Sochay 1994), release dates are used as dummy variables, following the logic that a high-attendance-period release (e.g., Christmas) attracts greater audiences and a low-attendance-period release (e.g., early December) is bad for revenue. However, because there are several peaks and troughs in attendance throughout the year, we used information from Vogel's (2001, Figure 2.4) study to produce a more sophisticated measure of seasonality. Vogel constructs a graph that depicts normalized weekly attendance over the year (based on 1969-84 data) and assigns a value between 0 and 1 for each date in the year (Christmas attendance is 1 and early December attendance is .37; these are the high and low points of the year, respectively). We matched each release date with the graph and assigned the RELEASE variable to account for seasonal fluctuations.
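To make these coding rules concrete, the sketch below shows one way the star-power and seasonality variables could be constructed in Python. It is only an illustration: the data frame, the column names, and the two example records are hypothetical, not taken from the Baseline or Variety files.

```python
import pandas as pd

# Hypothetical per-film records; the real inputs came from Baseline and Variety.
films = pd.DataFrame({
    "title":             ["Film A", "Film B"],
    "prior_wins":        [1, 0],   # acting/directing Oscars previously won by the cast or director
    "prior_nominations": [2, 0],   # prior nominations in the same categories (wins included)
    "release_week":      [51, 49], # calendar week of U.S. release (illustrative)
})

# Star-power dummies and the RECOGNITION score: each nomination counts 1,
# and a win adds 1 more, so an actual award effectively carries a weight of 2.
films["WONAWARD"] = (films["prior_wins"] > 0).astype(int)
films["NOMAWARD"] = (films["prior_nominations"] > 0).astype(int)
films["RECOGNITION"] = films["prior_nominations"] + films["prior_wins"]

# RELEASE: normalized attendance (0-1) for the release date, read off Vogel's
# (2001) seasonality chart; the lookup table here is an illustrative stub.
seasonality = {51: 1.00, 49: 0.37}
films["RELEASE"] = films["release_week"].map(seasonality)

print(films[["title", "WONAWARD", "NOMAWARD", "RECOGNITION", "RELEASE"]])
```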
TABLE 1
Variables and Correlations

Variable (Mean, S.D.)     BUDGET   RELEASE   POSRATIO   NEGRATIO   TOTNUM   POSNUM   NEGNUM   WONAWARD
BUDGET (15.68, 13.90)      1.00
RELEASE (.63, .16)          .004     1.00
POSRATIO (.43, .24)        -.131     .017      1.00
NEGRATIO (.31, .22)         .042    -.068     -.886       1.00
TOTNUM (34.22, 17.46)       .605     .150      .252      -.341      1.00
POSNUM (15.81, 12.03)       .283     .056      .740      -.704       .760     1.00
NEGNUM (9.23, 7.06)         .498     .124     -.579       .556       .448    -.179     1.00
WONAWARD (.15, .36)         .358     .077      .126       .139       .430     .379      .169      1.00

Notes: S.D. = standard deviation.

Results

Table 1 reports the correlation matrix for the key variables of interest. The ratio of positive reviews, POSRATIO, is negatively correlated with the ratio of negative reviews, NEGRATIO; that is, not many films received several negative and positive reviews at the same time. The most expensive film in the sample cost $70 million (Batman Returns) and is the film that has the highest first-week box office revenue ($69.31 million), opening to the maximum number of screens nationwide (3700). In our sample, the average number of first-week screens is 749, the average first-week box office return is $5.43 million, and the average number of reviews received is 34 (43% positive, 31% negative). Using a sample of 56 films, Eliashberg and Shugan (1997, p. 47) reported 47% positive reviews and 25% negative reviews. In our sample, Beauty and the Beast had the highest revenue per screen ($117,812 per screen, for two screens) and the highest total revenue ($426 million).

The Role of Critics

H1-H4 address critics' role as influencers, predictors, or both. To test the hypotheses, we ran three sets of tests. First, we replicated Eliashberg and Shugan's (1997) model by running separate regressions for each of the eight weeks; we included only three predictors (POSRATIO or NEGRATIO, SCREEN, and TOTNUM). In the second test, we expanded Eliashberg and Shugan's framework by including our control variables in the weekly regressions. In the third test, we ran a time-series cross-section regression that combined both cross-sectional and longitudinal data in one regression, specifically to control for unobserved heterogeneity.

The replications of Eliashberg and Shugan's (1997) results are reported in Tables 2 and 3. The coefficients of both positive and negative reviews are significant at .01 for each of the eight weeks, and they seem to support H3. Critics both influence and predict box office revenue, or they predict consistently across all weeks.

We then added the control variables to the regressions. Tables 4 and 5 report the results of this set of regressions.2 The results confirm what is evident in Tables 2 and 3: The critical reviews, both positive and negative, remain significant for every week. For the first four weeks, SCREEN appears to have the most significant impact on revenue, followed by BUDGET and POSRATIO (NEGRATIO). After four weeks, BUDGET becomes insignificant, and critical reviews become the second most important factor after screens. In general, the R2 and adjusted R2 are greater than those in Tables 2 and 3, suggesting an enhanced explanatory power of the added variables.

2 Although we report the results using one of the four possible definitions of star power, WONAWARD, rerunning the regressions using the other three measures of star power does not change the results.
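The week-by-week replication can be illustrated with the short sketch below, which fits a separate ordinary least squares model for each of the eight weeks using only POSRATIO, TOTNUM, and SCREEN, as in the Eliashberg and Shugan specification. The file name and the film-week data frame are assumptions for illustration, not the authors' actual data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed file: one row per film-week with columns WEEK (1-8), REVENUE
# (weekly domestic revenue), POSRATIO, TOTNUM, and SCREEN.
panel = pd.read_csv("film_weeks.csv")

for wk in range(1, 9):
    week_data = panel[panel["WEEK"] == wk]
    fit = smf.ols("REVENUE ~ POSRATIO + TOTNUM + SCREEN", data=week_data).fit()
    print(f"Week {wk}: n = {int(fit.nobs)}, "
          f"b(POSRATIO) = {fit.params['POSRATIO']:.3f} "
          f"(p = {fit.pvalues['POSRATIO']:.4f}), R2 = {fit.rsquared:.3f}")
```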
For the third test, we ran time-series cross-section regressions (see Table 6; Baltagi 1995; Hsiao 1986, p. 52).3 In this equation, the variable SCREEN varies across films and across time; the other predictors and control variables vary across films but not across time. We also created a new variable, WEEK, which has a value between 1 and 8 and thus varies across time but not across films. In this regression, we added an interaction term (POSRATIO x WEEK or NEGRATIO x WEEK) to assess the declining impact of critical reviews over time. The results support H3 and partially support H4. The coefficient of positive and negative reviews remains highly significant (β_positive = 3.32, p < .001; β_negative = -5.11, p < .001), pointing to the dual role of critics (H3). However, the interaction term is not significant for positive reviews, but it is significant for negative reviews, suggesting a declining impact of negative reviews over time, which is partially consistent with critics' role as influencers.

These results are somewhat different from Eliashberg and Shugan's (1997) findings (i.e., critics are only predictors) and Ravid's (1999) results (i.e., there is no effect of positive reviews). There are several reasons our results differ from those of Eliashberg and Shugan. First, although they included only those films that had a minimum eight-week run, our sample includes films that ran for less than eight weeks as well. We did so to accommodate films with short box office runs. Second, the size of our data set is three times as large as that of Eliashberg and Shugan (175 films versus 56). Third, our data set covers a longer period (late 1991 to early 1993) than their data set, which covers only films released between 1991 and early 1992. Fourth, we selected the films in our data set completely at random, whereas Eliashberg and Shugan, as they note, were more restrictive. Similarly, our results may differ from those of Ravid because we included reviews from all cities reported in Variety, not only New York, and we used weekly revenue data rather than the entire revenue stream.

3 We thank an anonymous reviewer for this suggestion.

Negative Versus Positive Reviews

H5 predicts that negative reviews should have a disproportionately greater negative impact on box office revenue than the positive impact of positive reviews. Because the percentages of positive and negative reviews are highly correlated (see Table 1; r = -.88), they cannot be put into the same model. Instead, we used the number of positive (POSNUM) and negative (NEGNUM) reviews, because they are not correlated with each other (see Table 1; r = .17), and thus both variables can be put into the same regression model. We expected the coefficient of NEGNUM to be negative, and thus there may be some evidence for negativity bias if |β_NEGNUM| is greater than |β_POSNUM|. Table 7 reports the results of our time-series cross-section regression. Although β_NEGNUM is negative and significant (β_NEGNUM = -.056, t = -2.29, p < .02) and β_POSNUM is positive and significant (β_POSNUM = .032, t = 2.34, p < .01), their difference (|β_NEGNUM| - |β_POSNUM|) is not significant (F(1, 1108) < 1).
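As a rough illustration of this pooled time-series cross-section setup, the sketch below fits a random-intercept model with a review-by-week interaction. It is only an analogue of the Fuller-Battese estimator reported in Table 6 (a film-level random intercept fitted with statsmodels), and the data file and column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed film-week panel with a film identifier plus the variables used in Table 6.
panel = pd.read_csv("film_weeks.csv")

# NEGRATIO * WEEK expands to NEGRATIO + WEEK + NEGRATIO:WEEK, so the model
# includes the main effects plus the interaction that tracks whether the
# impact of negative reviews fades as the run progresses (H4).
model = smf.mixedlm(
    "REVENUE ~ NEGRATIO * WEEK + SCREEN + BUDGET + WONAWARD + TOTNUM + SEQUEL + RELEASE",
    data=panel,
    groups=panel["film_id"],
).fit()

print(model.summary())
```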
TABLE 2
Replication of Eliashberg and Shugan's (1997) Regression Results with Percentage of Positive Reviews
(Table values are not legible in this copy; the corresponding results with the percentage of negative reviews appear in Table 3.)

TABLE 3
Replication of Eliashberg and Shugan's (1997) Regression Results with Percentage of Negative Reviews

Week (n)      R2 (Adj. R2)    NEGRATIO Coef. (Std.)   t (p)           TOTNUM Coef. (Std.)   t (p)           SCREEN Coef. (Std.)   t (p)            F-Ratio (p)
1 (n = 162)   .7290 (.7239)   -6.05792 (-.1525)       -3.18 (.0018)   .0285 (.05479)        1.10 (.2738)    .00888 (.84904)       17.80 (<.0001)   142.58 (<.0001)
2 (n = 154)   .7273 (.7219)   -5.10837 (-.17391)      -3.53 (.0005)   .04204 (.11328)       2.22 (.0276)    .00598 (.82294)       16.51 (<.0001)   134.26 (<.0001)
3 (n = 145)   .6518 (.6444)   -3.39389 (-.14618)      -2.59 (.0105)   .03451 (.11819)       1.98 (.0496)    .00447 (.76423)       13.16 (<.0001)   88.59 (<.0001)
4 (n = 139)   .7118 (.7054)   -1.97242 (-.12094)      -2.38 (.0187)   .01486 (.07007)       1.26 (.2090)    .00355 (.81431)       15.37 (<.0001)   111.95 (<.0001)
5 (n = 137)   .7298 (.7237)   -1.78567 (-.14178)      -2.89 (.0044)   .00418 (.02621)       .49 (.6252)     .003 (.83882)         16.60 (<.0001)   120.63 (<.0001)
6 (n = 132)   .7065 (.6997)   -1.73476 (-.14911)      -2.95 (.0038)   -.00368 (-.02515)     -.46 (.6465)    .00299 (.84649)       16.10 (<.0001)   103.52 (<.0001)
7 (n = 130)   .5604 (.5500)   -2.10310 (-.1672)       -2.76 (.0066)   -.00606 (-.03903)     -.60 (.5503)    .00296 (.7576)        11.79 (<.0001)   53.97 (<.0001)
8 (n = 122)   .6945 (.6868)   -1.20867 (-.13982)      -2.68 (.0083)   -.00704 (-.06662)     -1.18 (.2408)   .00261 (.85507)       15.39 (<.0001)   90.20 (<.0001)

Notes: Dependent variable is weekly revenue. Method is separate regressions for each week. Unstandardized coefficients are shown with standardized coefficients in parentheses, t-statistics with p-values in parentheses, and R2 with adjusted R2 in parentheses.

TABLE 4
Weekly Regression Results with Percentage of Positive Reviews and Control Variables
(Table values are not legible in this copy.)

TABLE 5
Weekly Regression Results with Percentage of Negative Reviews and Control Variables
(Table values are not legible in this copy.)
TABLE 6
Effect of Critical Reviews on Box Office Revenue (Fuller-Battese Estimations)

                                  Using Percentage of Positive Reviews          Using Percentage of Negative Reviews
Variable                          Coefficient   t-Value   Significance          Coefficient   t-Value   Significance
Constant                          -1.42         -.98      .33                   2.14          1.33      .18
WONAWARD                          .58           1.46      .14                   .69           1.59      .11
G                                 -1.18         -1.07     .28                   -1.46         -1.19     .23
PG                                .102          .10       .91                   -.33          -.31      .75
PG-13                             -.042         -.04      .96                   -.48          -.46      .64
R                                 .22           .24       .81                   -.16          -.16      .86
TOTNUM                            -.006         -.52      .60                   -.007         -.59      .55
RELEASE                           1.02          1.21      .22                   .77           .82       .41
SEQUEL                            .73           1.30      .20                   .55           .89       .37
BUDGET                            .032          2.24      .02                   .023          1.47      .14
POSRATIO                          3.321         3.33      .00
NEGRATIO                                                                        -5.11         -4.41     .00
SCREEN                            .005          22.06     .00                   .005          21.79     .00
WEEK                              -.436         -2.23     .02                   -.55          -2.38     .01
POSRATIO x WEEK                   -.023         -.14      .89
NEGRATIO x WEEK                                                                 .42           2.17      .03
R2                                .47                                           .43
Hausman test for random effects   M = 1.00                .60                   M = 2.00                .36

Notes: Dependent variable is weekly revenue; method is time-series cross-section regression. N = 159. Significance is the p-value.

TABLE 7
Tests for Negativity Bias

Variable                            Fuller-Battese Estimation   Week 1 Regression   Week 1 + Week 2 Regression
Constant                            .53 (.38)                   -2.94 (-1.34)       -2.47 (-1.56)
WONAWARD                            .55 (1.39)                  .08 (.07)           .41 (.56)
G                                   -1.65 (-1.50)               -6.21 (-2.46)*      -4.43 (-2.47)*
PG                                  -.58 (-.62)                 -2.09 (-1.00)       -1.39 (-.93)
PG-13                               -.71 (-.78)                 -1.50 (-.74)        -1.45 (.99)
R                                   -.46 (-.51)                 -1.22 (-.63)        -1.13 (-.81)
RELEASE                             1.10 (1.31)                 3.55 (1.70)***      2.45 (1.67)***
SEQUEL                              .64 (1.14)                  4.85 (3.37)*        3.45 (3.51)*
BUDGET                              .03 (2.17)**                .18 (5.05)*         .15 (5.76)*
POSNUM                              .032 (2.34)**               .052 (1.60)         .055 (2.40)*
NEGNUM                              -.056 (-2.29)**             -.209 (-3.42)*      -.148 (-3.49)*
SCREEN                              .005 (22.70)*               .007 (12.82)*       .006 (15.46)*
WEEK                                -.446 (-2.33)*
F-value for |NEGNUM| - |POSNUM|     .54, N.S.                   3.76*
N                                   159                         162                 317
R2                                  .471                        .798                .736

*p < .01. **p < .05. ***p < .1.
Notes: Dependent variable is weekly revenue; methods are time-series cross-section regression and weekly regressions (Week 1 and Week 1 + Week 2). The t-values are reported in parentheses. N.S. = not significant.

In some sense, we expected this pattern because we found that negative reviews, but not positive reviews, diminish in impact over time. A stronger test for the negativity bias should then focus on the early weeks (the first week in particular), when the studios have not had the opportunity to engage in damage control. As we expected, the negativity bias is strongly supported in the first week. Although β_NEGNUM is negative and significant (β_NEGNUM = -.209, t = -3.42, p < .0001), β_POSNUM is not significant (β_POSNUM = .052, t = 1.60, n.s.), and their difference (|β_NEGNUM| - |β_POSNUM|) is significant (F(1, 151) = 3.76, p < .05). Separate weekly regressions on the subsequent weeks (Week 2 onward) did not produce a significant difference between the two coefficients. The combined data for the first two weeks show evidence of negativity bias (Table 7).
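Assuming POSNUM enters positively and NEGNUM negatively, the comparison of the two absolute magnitudes reduces to a single linear restriction on the coefficients. The sketch below applies that test to the week 1 cross-section; as before, the data file and column names are assumptions, and the specification is only an approximation of the week 1 regression in Table 7.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed week 1 cross-section: one row per film.
week1 = pd.read_csv("film_weeks.csv").query("WEEK == 1")

fit = smf.ols(
    "REVENUE ~ POSNUM + NEGNUM + SCREEN + BUDGET + WONAWARD + SEQUEL + RELEASE",
    data=week1,
).fit()

# With b(POSNUM) > 0 and b(NEGNUM) < 0, |b(NEGNUM)| > |b(POSNUM)| is the same
# as b(POSNUM) + b(NEGNUM) < 0; an F-test of the restriction b(POSNUM) +
# b(NEGNUM) = 0, together with the sign of the sum, gives the evidence.
print(fit.params[["POSNUM", "NEGNUM"]])
print(fit.f_test("POSNUM + NEGNUM = 0"))
```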
It is possible that the negativity bias is confounded by perceived reviewer credibility. When consumers read a positive review, they may believe that the reviewers have a studio bias. In contrast, they may perceive a negative review as more likely to be independent of studio influence. To separate the effects of credibility from negativity bias, we ran an analysis that included only the reviews of two presumably universally credible critics: Gene Siskel and Roger Ebert.4 We were able to locate their joint reviews for only 72 films from our data set; of these films, 32 received two thumbs up, 10 received two thumbs down, and 23 received one thumb up. We coded three dummy variables: TWOUP (two thumbs up), TWODOWN (two thumbs down), and UP&DOWN (one thumb up). In the regressions, we used two of the dummy variables: TWOUP and TWODOWN. The results confirmed our previous findings. The coefficient of TWODOWN is significantly greater in magnitude than that of TWOUP in both the first week (β_TWODOWN = -6.51, β_TWOUP = .32; F(1, 57) = 4.95, p < .03) and the entire eight-week run (β_TWODOWN = -2.28, β_TWOUP = .42; F(1, 501) = 3.46, p < .06).

4 We thank an anonymous reviewer for this suggestion.

Star Power, Budgets, and Critical Reviews

H6 predicts that star power and big budgets can help films that receive more negative than positive reviews but do little for films that receive more positive than negative reviews. Because we made separate predictions for the two groups of films (POSNUM - NEGNUM ≤ 0 and POSNUM - NEGNUM > 0), we split the data into two groups. The first group contains 97 films for which the number of negative reviews is greater than or equal to that of positive reviews, and the second group contains the remaining 62 films for which the number of positive reviews exceeds that of negative reviews. We ran time-series cross-section regressions separately for the two groups. Table 8 presents the results.

Table 8 shows that when negative reviews outnumber positive reviews, the effect of star power on box office returns approaches statistical significance when measured with WONAWARD (β = 1.117, t = 1.56, p = .12) and is statistically significant in the case of RECOGNITION (β = .224, t = 2.09, p < .05). In each case, BUDGET has a positive, significant effect as well. However, when positive reviews outnumber negative reviews, neither the budget nor any definition of star power has any significant impact on a film's box office revenue. The results imply that star power and budget may act as countervailing forces against negative reviews but do little for films that receive more positive than negative reviews.

TABLE 8
Effects of Star Power and Budget on Box Office Revenue

                    When POSNUM - NEGNUM ≤ 0                          When POSNUM - NEGNUM > 0
                    (i.e., Negative Reviews Outnumber                 (i.e., Positive Reviews Outnumber
                    Positive Reviews) (n = 62)                        Negative Reviews) (n = 97)
Variable            Star Power Is       Star Power Is                 Star Power Is       Star Power Is
                    WONAWARD            RECOGNITION                   WONAWARD            RECOGNITION
Constant            1.540 (1.06)        1.234 (.86)                   1.238 (.77)         1.250 (.78)
WONAWARD            1.117 (1.56)        N.A.                          .529 (.99)          N.A.
RECOGNITION         N.A.                .225 (2.09)**                 N.A.                -.069 (-.95)
G                   -2.372 (-1.86)***   -2.679 (-2.11)**              -1.651 (-1.21)      -1.451 (-1.05)
PG                  -.131 (-.19)        -.340 (-.49)                  -.522 (-.47)        -.436 (-.39)
PG-13               -.818 (-1.54)       -.978 (-1.82)***              -.743 (-.69)        -.723 (-.67)
R                   —a                  —a                            -.503 (-.49)        -.387 (-.38)
RELEASE             -1.358 (-.90)       -.779 (-.53)                  1.331 (1.15)        1.212 (1.04)
SEQUEL              -.501 (-.63)        -.480 (-.61)                  1.531 (1.56)        1.057 (1.10)
BUDGET              .053 (3.01)*        .047 (2.65)*                  -.030 (-1.49)       -.017 (-.82)
SCREEN              .003 (10.97)*       .003 (11.09)*                 .006 (19.03)*       .005 (19.00)*
WEEK                -.447 (-2.20)*      -.446 (-2.20)*                -.482 (2.23)*       -.480 (2.22)*
R2                  .377                .380                          .486                .487
Hausman test for    M = 7.37*           M = 7.13*                     M = 8.87*           M = 8.25*
random effects

*p < .01. **p < .05. ***p < .1.
a This set did not have any unrated films and thus dropped the R rating during estimation.
Notes: N.A. = not applicable; dependent variable is weekly revenue; method is time-series cross-section regression. The t-values are reported in parentheses.
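A minimal sketch of this split-sample procedure follows: it divides the assumed film-week panel by whether negative reviews outnumber positive reviews and fits the same random-intercept specification to each group, so the star-power and budget coefficients can be compared across groups. The column names and the random-intercept analogue are assumptions, as in the earlier sketches.

```python
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("film_weeks.csv")  # assumed film-week panel

groups = {
    "negative reviews outnumber positive": panel[panel["POSNUM"] <= panel["NEGNUM"]],
    "positive reviews outnumber negative": panel[panel["POSNUM"] > panel["NEGNUM"]],
}

for label, subset in groups.items():
    fit = smf.mixedlm(
        "REVENUE ~ WONAWARD + BUDGET + SCREEN + WEEK + SEQUEL + RELEASE",
        data=subset,
        groups=subset["film_id"],
    ).fit()
    # H6: WONAWARD and BUDGET should matter only in the first group.
    print(label)
    print(fit.params[["WONAWARD", "BUDGET"]])
```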
Discussion and Managerial Implications

Critical reviews play a major role in many industries, including theater and performance arts, book publishing, recorded music, and art. In most cases, there are not enough data to identify critics' role in these industries. Are critics good predictors of consumers' tastes, do they influence and determine behavior, or do they do both? Our article sheds light on critics' role in the context of a film's box office performance. We further assess the differential impact of positive versus negative reviews and how they might operate jointly with star power and budget.

Our first set of results shows that for each of the first eight weeks, both positive and negative reviews are significantly correlated with box office revenue. The pattern is consistent with the dual perspective of critics (i.e., they are influencers and predictors). At the simplest level, this suggests that any marketing campaign for a film should carefully integrate critical reviews, particularly in the early weeks. If studios expect positive reviews, critics should be encouraged to preview the film in advance to maximize their impact on box office revenue. However, if studios expect negative reviews, they should either forgo initial screenings for critics altogether or invite only select, "friendly" critics to screenings. If negative reviews are unavoidable, studios can use stars to blunt some of the effects by encouraging appearances of the lead actors on television shows such as Access Hollywood and Entertainment Tonight (The Wall Street Journal 2001).

Our second set of results shows that negative reviews hurt revenue more than positive reviews help revenue in the early weeks of a film's release. This suggests that whereas studios favor positive reviews and dislike negative reviews, the impact is not symmetric. In the context of a limited budget, studios should spend more to control damage than to promote positive reviews. In other words, there may be more cost-effective options than spending money on advertisements that tout the positive reviews. First, studios could forgo critical screenings for fear of negative attention. For example, Get Carter and Autumn in New York did not offer advance screenings for critics, leading Roger Ebert (Guardian 2000) to comment that "the studio has concluded that the film is not good and will receive negative reviews." Second, studios could selectively invite "soft" reviewers. Third, studios could delay sending press kits to reviewers. Press kits generally contain publicity stills and production information for critics. Because newspapers do not run reviews without at least one press still from the film, withholding the kit gives the film an extra week to survive without bad reviews.

Our third set of results suggests that stars and budgets moderate the impact of critical reviews.
Our third set of results suggests that stars and budgets moderate the impact of critical reviews. Although star power may not be needed if a film receives good reviews, it can significantly lessen the impact of negative reviews. Similarly, big budgets contribute little if a film has already received positive reviews, but they can significantly lessen the impact of negative reviews. In this sense, big budgets and stars serve as an insurance policy. Because success in the film business is difficult to predict (see, e.g., De Vany and Walls 1999), as is the quality of the reviews a film will receive, executives can hedge their bets by employing stars or by using big budgets (e.g., on expensive special effects). These actions may not be needed and, on average, may not help returns; however, if critics pan the film, big budgets and stars can soften the blow and perhaps save the executive's job (Ravid and Basuroy 2003).

Implications for Other Industries

Although the current analysis applies to the film industry, we believe the results may apply to other industries in which consumers cannot accurately assess the quality of products before consumption (e.g., theater and the performing arts, book publishing, recorded music, financial markets). Critics may influence consumers, or consumers may seek out the critics who they believe accurately reflect their tastes (i.e., the predictor role). For example, in urban centers, "theater and dance critics wield nearly life-or-death power over ticket demand" (Caves 2000, p. 189); for Broadway shows, critics appear both to influence and to predict consumers' tastes (Reddy, Swaminathan, and Motley 1998). Similarly, research in the bond market shows that there is little market reaction to bond rating changes when the rating agency simply responds to public information (i.e., the rating agency merely predicts what the public has already done). In contrast, if the rating change is based on projections or inside research, the markets react to the news (see Goh and Ederington 1993). Beyond the role of critics, the other issues we have raised in this article (e.g., negativity bias, moderators of critical reviews) should be of significance in other industries as well. For example, bad reviews can doom a publisher's book (Greco 1997, p. 194), but as with films, readers' reliance on book critics is reduced when the book features a popular author rather than an unknown author (Levin, Levin, and Heath 1997). When enough data are available, there is ample opportunity to extend our framework to assess the revenue returns in similar creative businesses.

REFERENCES

Assael, Henry (1984), Consumer Behavior and Marketing Action, 2d ed. Boston: Kent Publishing Company.
Austin, Bruce (1983), "A Longitudinal Test of the Taste Culture and Elitist Hypotheses," Journal of Popular Film and Television, 11, 157-67.
Baltagi, Badi H. (1995), Econometric Analysis of Panel Data. West Sussex, UK: John Wiley & Sons.
Boston Globe (2001), "Big Studios Get Creative with Film Promotion," (June 19), E1.
BusinessWeek (2000), "Can Amazon Make It?" (July 10), 38-43.
Cameron, S. (1995), "On the Role of Critics in the Culture Industry," Journal of Cultural Economics, 19, 321-31.
Caves, Richard E. (2000), Creative Industries. Cambridge, MA: Harvard University Press.
Creative Multimedia (1997), Blockbuster Guide to Movies and Videos. Portland, OR: Creative Multimedia Inc.
De Silva, Indra (1998), "Consumer Selection of Motion Pictures," in The Motion Picture Mega-Industry, Barry R. Litman, ed. Needham Heights, MA: Allyn Bacon, 144-71.
De Vany, Arthur and David Walls (1999), "Uncertainty in the Movies: Can Star Power Reduce the Terror of the Box Office?" Journal of Cultural Economics, 23 (November), 285-318.
Einhorn, Hillel J. and Clayton T. Koelb (1982), "A Psychometric Study of Literary Critical Judgment," Modern Language Studies, 12 (Summer), 59-82.
Elberse, Anita and Jehoshua Eliashberg (2002), "Dynamic Behavior of Consumers and Retailers Regarding Sequentially Released Products in International Markets: The Case of Motion Pictures," working paper, The Wharton School, University of Pennsylvania.
Eliashberg, Jehoshua and Steven M. Shugan (1997), "Film Critics: Influencers or Predictors?" Journal of Marketing, 61 (April), 68-78.
Goh, Jeremy C. and Louis H. Ederington (1993), "Is a Bond Rating Downgrade Bad News, Good News, or No News for Stockholders?" Journal of Finance, 48 (5), 2001-2008.
Greco, Albert N. (1997), The Book Publishing Industry. Needham Heights, MA: Allyn Bacon.
Guardian (2000), "Hollywood Banishes the Critics That Bite," [available at http://www.guardian.co.uk/Archive/Article/0,4273,4074671,00.html].
Hardie, Bruce G.S., Eric J. Johnson, and Peter S. Fader (1993), "Modeling Loss Aversion and Reference Dependence Effects on Brand Choice," Marketing Science, 12 (4), 378-94.
Holbrook, Morris B. (1999), "Popular Appeal Versus Expert Judgments of Motion Pictures," Journal of Consumer Research, 26 (September), 144-55.
Hsiao, Cheng (1986), Analysis of Panel Data. New York: Cambridge University Press.
Ito, Tiffany A., Jeff T. Larsen, Kyle N. Smith, and John T. Cacioppo (1998), "Negative Information Weighs More Heavily on the Brain: The Negativity Bias in Evaluative Categorizations," Journal of Personality and Social Psychology, 75 (October), 887-901.
Jedidi, Kamel, Robert E. Krider, and Charles B. Weinberg (1998), "Clustering at the Movies," Marketing Letters, 9 (4), 393-405.
John, Kose, S. Abraham Ravid, and Jayanthi Sunder (2002), "The Role of Termination in Employment Contracts: Theory and Evidence from Film Directors' Careers," working paper, Stern School of Business, New York University.
Kahneman, Daniel and Amos Tversky (1979), "Prospect Theory: An Analysis of Decision Under Risk," Econometrica, 47 (March), 263-91.
Levin, Aron M., Irwin P. Levin, and C. Edward Heath (1997), "Movie Stars and Authors as Brand Names: Measuring Brand Equity in Experiential Products," in Advances in Consumer Research, Vol. 24, Merrie Brucks and Debbie MacInnis, eds. Provo, UT: Association for Consumer Research, 175-81.
Litman, Barry R. (1983), "Predicting the Success of Theatrical Movies: An Empirical Study," Journal of Popular Culture, 17 (Spring), 159-75.
Litman, Barry R. and Hoekyun Ahn (1998), "Predicting Financial Success of Motion Pictures," in The Motion Picture Mega-Industry, Barry R. Litman, ed. Needham Heights, MA: Allyn Bacon, 172-97.
Litman, Barry R. and L.S. Kohl (1989), "Predicting Financial Success of Motion Pictures: The '80s Experience," Journal of Media Economics, 1, 35-50.
MPAA (2002), "U.S. Entertainment Industry: 2002 MPA Market Statistics," [available at http://www.mpaa.org/useconomicreview/2002/2002_Economic_Review.pdf].
Neelamegham, Ramya and Pradeep Chintagunta (1999), "A Bayesian Model to Forecast New Product Performance in Domestic and International Markets," Marketing Science, 18 (2), 115-36.
The New York Times (1999), "Liking America, but Longing for India," (August 6), E2.
Prag, Jay and James Casavant (1994), "An Empirical Study of the Determinants of Revenues and Marketing Expenditures in the Motion Picture Industry," Journal of Cultural Economics, 18, 217-35.
Ravid, S. Abraham (1999), "Information, Blockbusters, and Stars: A Study of the Film Industry," Journal of Business, 72 (October), 463-92.
Ravid, S. Abraham and Suman Basuroy (2003), "Beyond Morality and Ethics: Executive Objective Function, the R-Rating Puzzle, and the Production of Violent Films," Journal of Business, forthcoming.
Reddy, Srinivas K., Vanitha Swaminathan, and Carol M. Motley (1998), "Exploring the Determinants of Broadway Show Success," Journal of Marketing Research, 35 (August), 370-83.
Shaw, Steven A. (2000), "The Zagat Effect," Commentary, 110 (4), 47-50.
Skowronski, John J. and Donald E. Carlston (1989), "Negativity and Extremity Biases in Impression Formation: A Review of Explanations," Psychological Bulletin, 105 (January), 17-22.
Smith, S.P. and V.K. Smith (1986), "Successful Movies—A Preliminary Empirical Analysis," Applied Economics, 18 (May), 501-507.
Sochay, Scott (1994), "Predicting the Performance of Motion Pictures," Journal of Media Economics, 7 (4), 1-20.
Tversky, Amos and Daniel Kahneman (1991), "Loss Aversion in Riskless Choice: A Reference Dependent Model," Quarterly Journal of Economics, 106 (November), 1040-61.
Vogel, Harold L. (2001), Entertainment Industry Economics, 5th ed. Cambridge, UK: Cambridge University Press.
Walker, Chip (1995), "Word of Mouth," American Demographics, 17 (July), 38-44.
The Wall Street Journal (2001), "'Town & Country' Publicity Proves an Awkward Act," (April 27), B1, B6.
Wallace, W. Timothy, Alan Seigerman, and Morris B. Holbrook (1993), "The Role of Actors and Actresses in the Success of Films," Journal of Cultural Economics, 17 (June), 1-27.
Weiman, Gabriel (1991), "The Influentials: Back to the Concept of Opinion Leaders," Public Opinion Quarterly, 55 (Summer), 267-79.
West, Patricia M. and Susan M. Broniarczyk (1998), "Integrating Multiple Opinions: The Role of Aspiration Level on Consumer Response to Critic Consensus," Journal of Consumer Research, 25 (June), 38-51.
Wyatt, Robert O. and David P. Badger (1984), "How Reviews Affect Interest in and Evaluation of Films," Journalism Quarterly, 61 (Winter), 874-78.
Yamaguchi, Susumu (1978), "Negativity Bias in Acceptance of the People's Opinion," Japanese Psychological Research, 20 (December), 200-205.