21 Analyzing Talk and Text I: Qualitative Content Analysis

Manuel Puppis

Introduction

Turning talk and text into research results requires some form of data analysis. Whether we deal with interviews (see Chapter 9 by van Selm & Helberger and Chapter 10 by Van Audenhove & Donders), group discussions or focus groups (see Chapter 11 by Lunt), observations (see Chapter 12 by Jackson & Glowacki), documents (see Chapter 14 by Karppinen & Moe) or meta-analysis (see Chapter 6 by Meier): following its collection, qualitative data first needs to be prepared for analysis and then 'examined and interpreted in order to elicit meaning, gain understanding and develop empirical knowledge' (Bowen, 2009, p. 27). In the social sciences, especially in German-speaking countries, qualitative content analysis is one of the most widely used methods for analyzing qualitative data. Qualitative content analysis promises to be of relevance for media and communication policy research as well, given that the field often makes use of documents and interviews with policy-makers and industry representatives (Just & Puppis, 2012). The method offers a systematic, step-by-step approach to interpreting interview transcripts, observation notes and documents. This chapter aims at showing what qualitative content analysis is and how it can be used for media policy research. Given its origin in German-language social sciences, the method is internationally less well known than other qualitative methods of data analysis.

M. Puppis, Department of Communication and Media Research, University of Fribourg, Fribourg, Switzerland

© The Author(s) 2019, H. Van den Bulck et al. (eds.), The Palgrave Handbook of Methods for Media Policy Research, https://doi.org/10.1007/978-3-030-16065-4_21

The chapter starts by defining qualitative content analysis and briefly presenting the method's development.
It then focuses on strengths and weaknesses of the method before delving into the different steps of analysis.

Definition, Logic and Rationale

Qualitative content analysis has its roots in quantitative content analysis, which was developed in communication studies to analyze media content (Berelson, 1952). As the discipline's 'original' empirical method, quantitative content analysis plays a major role to this day (Fürst, Jecker, & Schönhagen, 2016; Nawratil & Schönhagen, 2009). While quantitative approaches were criticized early on (Kracauer, 1952), use of the method flourished, and a codification of a qualitative alternative took place much later, especially in German-speaking countries, with Philipp Mayring's book on qualitative content analysis first published in the early 1980s. Due to this historical development, 'qualitative content analysis has not been well known as a method in its own right in most English-speaking countries until recently' (Schreier, 2014, p. 172). Qualitative content analysis is a method of data analysis. It starts once data collection is finished and analyzes texts that were either created by the researchers themselves (e.g. transcripts from interviews and group discussions; observation notes) or that exist irrespective of research activities (e.g. documents created as part of policy-making and by media organizations; media content; prior empirical studies for qualitative meta-analysis). This also emphasizes that qualitative content analysis, in contrast to its quantitative predecessor, is not primarily used to analyze media content but applied to all kinds of texts. Like other qualitative methods of data analysis in the social sciences, qualitative content analysis does not treat texts as an object of analysis themselves but as a 'window into human experience' (Ryan & Bernard, 2000, p. 769). Mayring (2014, p.
39) stresses the importance of always interpreting texts within their 'communicative context', meaning that texts can only be understood in light of their creation and purpose and that their analysis aims at making an inference beyond the text. Moreover, qualitative content analysis is not only useful for investigating the manifest meaning of texts but also for analyzing latent meanings, what is omitted and the context of texts (Mayring, 2002; Schreier, 2014). According to Mayring (2014, p. 10), the 'central idea of Qualitative Content Analysis is to start from the methodological basis of Quantitative Content Analysis but to conceptualize the process of assigning categories to text passages as a qualitative-interpretive act, following content-analytical rules'. It is thus a method for the systematic analysis of qualitative data by way of assigning categories to text material (Fürst et al., 2016; Julien, 2008; Mayring & Hurst, 2017; Schreier, 2014). The method's systematic approach can be seen as the main reason for its popularity. Moreover, Mayring (2014) describes qualitative content analysis as a mixed-methods approach that is also open to quantitative steps of analysis, e.g. counting the frequencies of categories. However, this is by no means a necessity.

Critical Assessment of the Method

For media and communication policy research, qualitative content analysis is a highly useful method of data analysis. Given the multitude of empirical studies interested in media policy-making and regulation that use interviews and documents for data collection, methods for analyzing such qualitative data are essential for the research field. As one such method, qualitative content analysis offers a number of advantages. First, due to its roots in quantitative content analysis, it offers a way to systematically analyze data.
While qualitative content analysis shares many features with other qualitative methods of data analysis 'such as the concern with meaning and interpretation of symbolic material, the importance of context in determining meaning, and the data-driven and partly iterative procedure' (Schreier, 2014, p. 173), its rule-based step-by-step approach adapted from quantitative content analysis is a main differentiator (Mayring, 2014). Although the codebook is not standardized but developed anew for each research project, the process remains the same. Following a sequence of predefined steps does not preclude an iterative process, and it explicitly allows for 'going through some of these steps repeatedly' (Schreier, 2014, p. 171). Yet in contrast to other forms of qualitative data analysis, the step-by-step approach of qualitative content analysis is more restrictive when it comes to coding. Qualitative content analysis separates the phases of code development and application (Schreier, 2014), meaning that there is a trial phase (or pretest) and every subsequent change of the codebook requires going through the complete text material again. Moreover, categories should be mutually exclusive. The systematic approach guarantees that every single part of the text material is thoroughly analyzed, and that the analysis is intersubjectively comprehensible (Alemann & Tönnesmann, 1995; Mayring, 2014; Mayring & Hurst, 2017). Moreover, the step-by-step approach and the use of a codebook allow for evaluating the quality of data analysis, most importantly whether different researchers (intercoder reliability) or the same researcher at different points in time (intracoder reliability) code the text in the same way (Mayring, 2014). Despite this close relationship to quantitative content analysis, it is a truly qualitative method.
Unlike its quantitative counterpart, which is often criticized for being limited to manifest meaning, qualitative content analysis is well equipped for analyzing latent meanings, understanding texts within their social context, and scrutinizing aspects omitted in texts (Mayring, 2002; Schreier, 2014). Second, qualitative content analysis helps reduce the vast text material scholars aim at analyzing. On the one hand, the codebook requires 'the researcher to focus on selected aspects of meaning, namely those aspects that relate to the overall research question' (Schreier, 2014, p. 170). On the other hand, during the analysis the level of abstraction is raised, meaning that categories ultimately apply to more than one particular text segment. Third, the method allows for both deductive (theory-driven) and inductive (data-driven) code development. It is thus suitable whether researchers prefer to develop their categories from theory or out of their text material. Usually, studies combine theory-driven and data-driven categories within one codebook. Most projects taking a deductive approach will develop at least some categories out of the text material. Likewise, scholars who aim at inductively developing categories will already have an understanding of what they are looking for in the material thanks to theory and the state of research (Nawratil & Schönhagen, 2009; Schreier, 2014). The limitations of qualitative content analysis are, well, limited. To begin with, Schreier (2014, p. 181) argues that the method is mainly descriptive: 'This implies that the material is taken "for granted"; the method is, so to speak, ontologically and epistemologically "naive"'. As a result, the method would not be suitable for theory-building. Yet this criticism only applies to so-called literal readings of texts that restrict the analysis to literal content.
Qualitative researchers usually would not stop there but aim at an interpretative reading, that is, 'reading through or beyond the data in some way' (Mason, 2002, p. 149) to find out what they can infer from the data. And while documents exist irrespective of the researcher, when doing interviews, group discussions or observations, scholars might also aim at a reflexive reading that explores their own role in the generation of data. In both interpretative and reflexive readings, scholars are aware that the social world is constructed and that talk and text cannot be taken at face value. Consequently, qualitative content analysis goes well beyond a description of data. What is more, Mayring's seminal accounts of the method are not always clearly laid out. On the one hand, the differentiation into three techniques of qualitative content analysis, and his association of deductive and inductive code development with one of them exclusively, is counterintuitive (see below). On the other hand, Mayring's work mainly discusses the technicalities of coding text material, staying oddly silent on the interpretation of extracted text segments. Several scholars have thus begun to stress the interplay of deductive and inductive code development and to also focus on the steps necessary for interpreting content (Fürst et al., 2016; Nawratil & Schönhagen, 2009; Puppis, 2009). As with every method, qualitative content analysis also merits a short discussion of research ethics. It is always crucial to question the implications for individuals involved in or affected by research. Given that qualitative content analysis is a method of data analysis, most ethical questions come up during earlier phases of the research process (sampling and data collection) or during the writing process. When using nonpublic documents, permission is necessary.
With respect to interviews, ethics is not just about who you ask, what you ask or how you ask it, but also whether interviewees are revealing more than they should and whether we as researchers can guarantee confidentiality and anonymity (Mason, 2002). Especially in interviews with experts and elites, interviewees will be recognized by others in the field. Informed consent to the interview and its recording, as well as approval for quotes used in research reports and publications, is imperative. Irrespective of the method of data collection, during writing researchers also need to think about the consequences of how they quote and portray interviewees, observed persons or authors of documents.

Planning and Conducting

Mayring (2014) distinguishes three different techniques of qualitative content analysis, namely summarization, explication and structuring.

• Summarization (or summary) attempts to reduce the text material to its essential content by several steps of paraphrasing and generalizing. According to Mayring, this technique is connected to inductive category development.
• Explication aims at clarifying text segments that are unintelligible by consulting additional material, either text passages related to the segment in question within the same text or further information on the author and the context of creation.
• Structuring attempts at systematically extracting particular aspects from the text material. Mayring puts this technique in connection with deductive category assignment.

Fig. 21.1 Steps of Qualitative Content Analysis (Source: own depiction inspired by Mayring [2014], Mayring and Hurst [2017], and Nawratil and Schönhagen [2009]):

Step 1: Research Question, Theory and State of Research
Step 2: Choosing Method of Data Collection and Sampling
Step 3: Data Collection
Step 4: Data Preparation
Step 5: Coding
  Step 5a: Determining Unit of Analysis
  Step 5b: Deductive Development of Codebook / Definition of Selection Criteria
  Step 5c: Trial Coding / Inductive Development of Categories
  Step 5d: Revision of Categories and Coding Guidelines
  Step 5e: Coding of the Complete Text Material (Structuring)
  Step 5f: Extraction of Text Segments
  Step 5g: Summarization of Text Segments
Step 6: Interpretation
  Step 6a: Thematic Comparison
  Step 6b: Theoretical Generalization
Step 7: Presentation of Results

This differentiation, as well as the linkage of techniques with either inductive or deductive coding, has been criticized. Scholars argue that one of the strengths of qualitative content analysis is the combination of theory-driven (deductive) and data-driven (inductive) code development and that the three techniques are mostly used in combination (Fürst et al., 2016; Nawratil & Schönhagen, 2009; Schreier, 2014). The step-by-step approach (see Fig. 21.1) presented in the following is thus based on the idea that every qualitative content analysis first involves coding (a structuring of text material according to inductively and/or deductively developed categories) which allows for the extraction of relevant text segments that are then summarized before moving to their interpretation (Fürst et al., 2016; Meuser & Nagel, 2009; Nawratil & Schönhagen, 2009; Puppis, 2009).
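As a rough orientation, the sequence just described can be sketched as a loop in which trial coding and codebook revision repeat until the categories are stable, after which the complete material is coded, extracted and summarized. This is a minimal sketch under our own assumptions: all helper functions are illustrative stand-ins for interpretive work that, in practice, a researcher performs.

```python
# Sketch of the workflow as a loop. All helpers are illustrative stand-ins;
# their names are our own shorthand, not part of the method's terminology.

def code_text(text, codebook):
    # Stand-in for interpretive coding: naive keyword matching only.
    return [(category, text) for category in codebook if category in text]

def revise_codebook(codebook, trial_codings):
    # Stand-in for step 5d: here the codebook is simply accepted as stable.
    return []

def extract_segments(codings):
    # Step 5f: compile same-category segments across the material.
    by_category = {}
    for coded_text in codings:
        for category, segment in coded_text:
            by_category.setdefault(category, []).append(segment)
    return by_category

def summarize(segments):
    # Step 5g: reduce each category (here trivially, to a segment count).
    return {category: len(segs) for category, segs in segments.items()}

def qualitative_content_analysis(texts, codebook, max_revisions=5):
    for _ in range(max_revisions):                       # steps 5c-5d, repeated
        trial = [code_text(t, codebook) for t in texts]
        if not revise_codebook(codebook, trial):
            break                                        # categories are stable
    codings = [code_text(t, codebook) for t in texts]    # step 5e: full coding
    segments = extract_segments(codings)                 # step 5f: extraction
    return summarize(segments)                           # step 5g: summarization

texts = ["the council needs public trust", "sanctions were published"]
print(qualitative_content_analysis(texts, codebook=["trust", "sanctions"]))
```

The point of the loop is only the control flow: a codebook revision forces a fresh pass through all texts, matching the rule that every change of the codebook requires going through the complete material again.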
Research Question, Theory and Sampling

The first steps of qualitative content analysis take place before data are ready for the actual analysis. They involve theoretical work and sampling. Irrespective of the methods of data collection and data analysis, empirical research projects first require formulating research question(s), dealing with theories that help in understanding the object of research and processing the current state of empirical research on the topic (step 1). Qualitative content analysis is no exception, no matter whether code development will be mainly theory-driven (deductive) or data-driven (inductive). Theories are also critical for interpreting the text material. Depending on the research question(s) and the aim of the project, scholars then need to choose a method of data collection and perform a sampling procedure (step 2). Whereas interviews, group discussions (focus groups) and observations imply that data will be generated by the researcher(s), documents and media content already exist. Yet in both cases, it is necessary to determine the material that will be analyzed, e.g. which representatives of which organizations will be interviewed, or which documents will be collected. In qualitative research, sampling is not based on the idea of representativeness. Rather, so-called theoretical sampling aims at choosing the material necessary to answer the research questions (Mayring, 2014; Nawratil & Schönhagen, 2009).

Data Collection

Subsequently, data collection begins (step 3). Interviews, group discussions or observations need to be performed (including the development of an interview or field guide, making the appointments and doing the actual interviews, discussions or observations); documents or media coverage have to be acquired. The chapters in Part III of this handbook deal with various methods of data collection.
In line with an interpretative and reflexive reading of qualitative data, researchers should be aware that data collection does not simply mean excavating facts (Mason, 2002). As Brinkmann and Kvale (2015) put it, researchers are not miners who collect knowledge but travelers who are involved in (re)constructing knowledge. While this might seem obvious with respect to interviews or observations, documents are neither direct representations of 'reality' nor 'factual records' but are always constructed 'in particular contexts, by particular people, with particular purposes, and with consequences—intended and unintended' (Mason, 2002, p. 110). Hence, they need to be critically assessed before analysis (see Chapter 14 by Karppinen & Moe). This so-called 'source criticism' aims at asking critical questions about the nature of documents that are used for research in order to be able to contextualize them during interpretation. In short, it is necessary to (a) determine the existence and accessibility of documents, (b) describe the documents (type of document, context of creation, authors, addressees, state of preservation), (c) discuss the authenticity of documents and the version(s) at hand, and, most importantly, (d) look into the purpose of each document and the intentions of the author, including their view of reality, selectivity and strategy (Bowen, 2009; Mason, 2002; Reh, 1995; Scott, 1990).

Data Preparation

Once data collection is finished, the recordings of interviews or group discussions, notes from observations, and collected documents or media coverage need to be prepared for analysis (step 4). Interviews and group discussions require transcription to turn spoken into written language (see also Chapter 9 by van Selm & Helberger and Chapter 10 by Van Audenhove & Donders). Both recording and transcribing interviews imply a loss of information as they focus on some aspects of an interaction only.
Media and communication policy research is normally interested in the content of conversations (what is said and what is not) and less in semantics and paraverbal aspects. Thus, interviews and group discussions are usually transformed into a 'clean-read verbatim transcript', i.e. a text that presents the complete conversation word for word but without hesitation sounds and with corrected sentence structure (Brinkmann & Kvale, 2015; Mayring, 2014).

Coding

'Coding is the heart and soul of whole-text analysis' (Ryan & Bernard, 2000, p. 780), including qualitative content analysis, and precedes interpretation. Basically, information is organized into categories related to the research questions (Bowen, 2009).

Box 21.1: Empirical Example: Press Councils as Self-Regulatory Organizations (Puppis, 2009)

Press councils are a cornerstone of media self-regulation in many countries. Departing from sociological institutionalism, the study conceptualized press councils as organizations that need to deal with pressures from their environments to receive the legitimacy required for survival. It asked how these self-regulators are organized, how their structures and practices can be explained, and whether they are a response to institutional requirements. In the first part of the study, the structures and practices of all press councils in EU and EFTA member states were investigated using publicly available documents that were analyzed with a deductively developed codebook. In the second part of the study, an in-depth analysis of the development of press councils in the United Kingdom, Germany, Ireland and Switzerland was performed. Categories were developed inductively out of both qualitative interviews and archival documents so that themes could emerge across the different sources.
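Before coding begins, the clean-read transcription described under data preparation can be partly mechanized: hesitation sounds can be stripped automatically, while correcting sentence structure remains a human task. The sketch below assumes a simple filler-word list of our own choosing; it is not a standardized transcription convention.

```python
# Sketch of a mechanical first pass toward a clean-read verbatim transcript
# (step 4): hesitation sounds are dropped, punctuation attached to kept words
# is preserved. The FILLERS set is an illustrative assumption.
FILLERS = {"uh", "um", "erm", "uhm", "mhm"}

def clean_read(utterance: str) -> str:
    kept = [w for w in utterance.split()
            if w.strip(",.?!;").lower() not in FILLERS]
    return " ".join(kept)

print(clean_read("Well, um, the council was, uh, founded much later."))
```

A human pass over the output is still needed, both to repair sentence structure and to catch fillers the list misses.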
Assigning categories to segments of text requires determining the basic unit of analysis (step 5a), e.g. entire texts, grammatical segments (sentences, paragraphs), formatting units (pages, rows) or thematic units, i.e. segments of text that reflect a single theme (Mayring, 2014; Meuser & Nagel, 2009; Ryan & Bernard, 2000). For media and communication policy research interested in the content of texts, thematic units are especially useful. Thematic units were also used as units of analysis in both parts of the study on press councils that serves as an example of how to use qualitative content analysis (see Box 21.1). Next, the actual code development and preliminary coding begin. Whether categories are mainly developed deductively or inductively, some general rules apply. While main categories indicate the aspects the researcher(s) are interested in, subcategories specify 'what is said in the material with respect to these main categories' (Schreier, 2014, p. 174). Subcategories within each main category should be mutually exclusive, meaning that while each unit of analysis can be coded more than once, it can only be coded once under a particular main category. In addition, all relevant segments of text should be covered by at least one subcategory (Fürst et al., 2016; Schreier, 2014). Many research projects combine deductive and inductive coding. For instance, main categories are often deductively created whereas the subcategories (and additional main categories) are then inductively developed (Fürst et al., 2016). Even if this is not the case, both techniques always include a theory- and a data-driven element. In case of deductive code development, a codebook (sometimes also called an analytical framework) needs to be developed that contains all the main categories and subcategories drawn from theory and prior research (step 5b).
The codebook also includes coding guidelines for each category, namely a definition or description of the category, a real-text example, and (if necessary) rules that help in deciding to which subcategory a segment of text should be assigned (Kohlbacher, 2006; Mayring, 2014; Nawratil & Schönhagen, 2009; Ryan & Bernard, 2000; Schreier, 2014). This helps in guaranteeing that the coding is intersubjectively comprehensible and that subcategories within a main category are mutually exclusive (Fürst et al., 2016). For example, the study interested in the organization of press councils developed a codebook out of existing literature on press councils. This codebook was then used to code all relevant documents that contain information about three main categories, namely the structure of the organization, the internal body dealing with complaints and the process of handling complaints. For each main category several subcategories were developed deductively, e.g. 'sanctions' (see excerpt in Table 21.1).

Table 21.1 Excerpt from a deductively developed codebook

Main category: Process of handling complaints
Subcategory: Sanctions
Definition: Measures that the press council may take in case a complaint is upheld
Example: "When requested by the press council to do so, the press shall publish the decision in relation to a complaint with due prominence"
Rules: Text segments that describe what media found in breach of the code of ethics are obligated to do

The development of the codebook is followed by a trial coding of text material to see whether the categories are exhaustive and practicable (step 5c). Additional categories may be created inductively out of the text material whenever relevant aspects in the text material cannot be assigned to preexisting categories (Nawratil & Schönhagen, 2009).
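A codebook entry as just described (definition, real-text example, decision rules) can be kept in a small structure. The sketch below transcribes the sanctions entry from Table 21.1; the field layout is our own illustrative choice, not a standardized schema.

```python
# Sketch of a codebook entry, assuming a simple per-entry record with the
# coding-guideline fields described in the text (definition, example, rules).
from dataclasses import dataclass

@dataclass(frozen=True)
class CodebookEntry:
    main_category: str
    subcategory: str
    definition: str   # what the category covers
    example: str      # a real-text example from the material
    rules: str        # decision rules for ambiguous segments

# Entry transcribed from Table 21.1 (deductively developed codebook)
sanctions = CodebookEntry(
    main_category="Process of handling complaints",
    subcategory="Sanctions",
    definition="Measures that the press council may take in case a complaint is upheld",
    example=("When requested by the press council to do so, the press shall publish "
             "the decision in relation to a complaint with due prominence"),
    rules=("Text segments that describe what media found in breach of the code of "
           "ethics are obligated to do"),
)

# Index entries by (main category, subcategory) for lookup during coding
codebook = {(sanctions.main_category, sanctions.subcategory): sanctions}
print(codebook[("Process of handling complaints", "Sanctions")].definition)
```

Keeping entries in a structured form like this makes it easy to print the current codebook for coders during trial coding and to track changes across revisions.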
In case of inductive code development, the analysis starts by defining selection criteria that focus code development on certain aspects of the text material (step 5b). As a deductive element, the research question(s) and the theoretical framework of the study offer guidance as to what is of interest in the analyzed data (Kohlbacher, 2006; Mayring, 2014; Nawratil & Schönhagen, 2009). Based on the theory of sociological institutionalism, the study on press councils was interested in the circumstances of a press council's formation, the reasons offered for the implementation of the structures and practices in place, and whether there is a connection between perceived expectations toward the organization from its environment on the one hand and the actual organizational design on the other. Thus, the analysis of interviews and documents was focused on these three aspects. Subsequently, inductive coding starts (step 5c). The text material is worked through line by line and the first time a segment of text fits the selection criteria, it is assigned to a category. Categories should be formulated based on the actual terminology used in the text. The next time a segment of text fits the selection criteria, either a new category will be formulated, or it will be subsumed under an already existing one (Mayring, 2014; Meuser & Nagel, 2009; Nawratil & Schönhagen, 2009). In case new categories refine, split or narrow existing ones, the previously coded text material needs to be reanalyzed (Bowen, 2009). This initial coding continues until no new categories are expected. In the study on press councils, interviews and internal documents were coded line by line. Given the selection criteria, the analysis focused on environmental influences on organizational structures like, for instance, the existence of lay members. In some documents and interviews it was argued that having lay members was inevitable because otherwise the organization would not be trusted by the public.
These segments of text were coded with the category 'public trust' (see Table 21.2).

Table 21.2 Example of inductive coding

Category: Public trust
Text segment: "A self-regulatory body [...] must both be representative and impartial and appear to be so because you have to live in the public domain. [...] You've got to be seen to be doing what you are doing as well as doing it. [...] [We] could in theory work if it was just editors. But in fact, nobody would trust it. Therefore, in practice, it couldn't work. You have to have public members represented"

Both theory- and data-driven coding sometimes might require what Mayring (2014) calls explication, i.e. the consultation of contiguous text passages or additional material to better understand incomprehensible text segments (Nawratil & Schönhagen, 2009). After this initial step of coding, the categories and the coding guidelines have to be revised (step 5d). The results of the coding done so far are discussed to check whether all coders (intercoder reliability) or the same coder at different points in time (intracoder reliability) made the same coding decisions and to find solutions where there is disagreement (van den Hoonaard, 2008). Usually, this is done after working through 10-50% of the text material. In case of deductive coding, the revision aims, on the one hand, at combining subcategories that are too close to each other and at adding new ones that were inductively developed. On the other hand, coding guidelines are adjusted to make sure that all coders assign categories in the same way (Mayring, 2014). In case of inductive coding, researchers need to check whether the selection criteria actually help to focus on text material that is relevant for answering the research question(s) and whether categories are mutually exclusive, i.e. that there is no overlap. Subcategories can also be arranged under main categories (Mayring, 2014).
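Two checks in this revision step lend themselves to simple bookkeeping: whether coders made the same decisions, and whether any unit ended up with more than one subcategory under the same main category. The sketch below assumes a data layout of our own; percent agreement is only a baseline measure, not a chance-corrected coefficient such as Cohen's kappa.

```python
# Sketch of two revision-step checks (step 5d), under an assumed layout:
# category labels per unit for agreement, and (unit, main category,
# subcategory) triples for the mutual-exclusivity check.
from collections import defaultdict

def percent_agreement(coder_a, coder_b):
    """Share of units two coders assigned to the same category.
    Both lists must refer to the same units in the same order."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("coders must rate the same non-empty set of units")
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def exclusivity_violations(codings):
    """codings: (unit_id, main_category, subcategory) triples. Returns the
    (unit_id, main_category) pairs carrying more than one subcategory,
    i.e. violations of mutual exclusivity within a main category."""
    seen = defaultdict(set)
    for unit, main, sub in codings:
        seen[(unit, main)].add(sub)
    return sorted(key for key, subs in seen.items() if len(subs) > 1)

# Hypothetical trial coding of six thematic units by two coders
a = ["public trust", "funding", "public trust", "sanctions", "funding", "sanctions"]
b = ["public trust", "funding", "sanctions", "sanctions", "funding", "funding"]
print(percent_agreement(a, b))  # 4 of 6 units coded identically

codings = [
    (1, "influence of environment", "public trust"),
    (1, "process of handling complaints", "sanctions"),  # other main category: fine
    (2, "influence of environment", "public trust"),
    (2, "influence of environment", "state pressure"),   # violation
]
print(exclusivity_violations(codings))
```

Disagreements and violations flagged this way are starting points for discussion among coders, not automatic corrections; the resolution still goes into revised coding guidelines.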
In the study on press councils, the inductively developed categories were revised to prevent overlap. Specifically, categories that were too close to each other were merged. Moreover, categories were also grouped and assigned to main categories. For instance, 'public trust' was assigned to the main category 'influence of environment'. As with deductive coding, the consistency of different coders' decisions needs to be discussed as well (Mayring, 2014). Once the categories and coding guidelines have been finalized, the coding (or structuring) of the whole text material starts (step 5e). Given that revisions to the codebook have been made, the coding procedure needs to start over from the beginning (Mayring, 2014). As before, the material is worked through line by line, segments of text are assigned to the deductively or inductively developed categories and sometimes an explication of incomprehensible segments might become necessary. It is still possible to inductively add new categories. However, with each new category the already coded text material needs to be checked to see whether there could be another segment of text fitting the new category or whether previous code assignments require revision (Bowen, 2009; Mayring, 2014; Nawratil & Schönhagen, 2009). Once this step is finished, all relevant text segments have been assigned to categories. The categories basically 'act as tags to mark off text in a corpus for later retrieval' (Ryan & Bernard, 2000, p. 782). And this is the next step: extracting the coded text segments (step 5f). With this step, data analysis goes beyond individual coded text segments by detaching them first from the sequentiality of each analyzed text and second from the single texts themselves. This means that text segments that were assigned the same category are compiled within each text and then across the complete text material.
During this double process of extraction, categories that are redundant will be combined (Meuser & Nagel, 2009; Puppis, 2009). Coding and extraction can be done manually, either on paper or using regular word processing software, by writing down the category next to a relevant text segment and then copy-pasting the coded text segments into new files. During extraction, it is pertinent to keep track of which interview or document a text segment is taken from. It is a rather tiring exercise and easily becomes complex. Thus, in projects working with large amounts of interview transcripts, observation notes and/or documents, the use of specialized software is recommended (see Chapter 25 by Mortelmans). Among the more popular ones are MAXQDA, ATLAS.ti and NVivo. Moreover, with QCAmap, Mayring offers a web app for qualitative content analysis (www.qcamap.org). The study of press councils used MAXQDA to code and extract the text segments assigned to the various categories. The software then allows for exporting new text files per category (e.g. 'public trust') that always record the source of text segments. Within each category, the text material can now be reduced by summarizing text segments that are identical with respect to their content (step 5g). As a result, different aspects or viewpoints coded under one category become more visible (Fürst et al., 2016; Mayring, 2014; Nawratil & Schönhagen, 2009). While summarization is useful for reducing the text material, researchers should be careful to keep illustrative quotes from interviews and documents that they want to use during the writing process.

Interpretation

Now that the complete text material (e.g. interview transcripts, observation notes and/or documents) has been coded, extracted and summarized per category, interpretation starts. Only through interpretation do data turn into results that can be discussed in light of the research question(s) and theory.
Thematic comparison (step 6a) aims at discovering similarities and differences and identifying patterns within each category. For finding patterns and a story to tell, it can be helpful to think about the background of authors, organizations or interview partners, e.g. their country, industry or political affiliation. Yet it is important to be open toward unanticipated insights (Alemann & Tönnesmann, 1995; Bowen, 2009; Fürst et al., 2016; Puppis, 2009). Finally, the so-called theoretical generalization (step 6b) systematically arranges the categories in relation to each other and puts them into a theoretical context, explaining what the results mean and how they advance our knowledge about theories and the research object. In projects making use of inductive code development, prior to this it is also necessary to translate categories based on the terminology used in the texts into theoretical terms (Meuser & Nagel, 2009). Coming back to the study of press councils and the example of needing lay members because of public trust: first, all the text segments assigned to this category were compared to each other to find similarities and differences in organizational responses (i.e. did the public's expectation lead to a change in organizational structures and practices). Then, the inductively developed categories were translated into theoretical terms. Sociological institutionalism distinguishes three environmental influences on organizations: taken-for-grantedness (cultural-cognitive), moral obligations (normative) and coercion (regulative). Expectations of the public would qualify as moral obligations. Yet organizations do not always acquiesce to environmental requirements but can also try to compromise, avoid implementation by concealing their nonconformity, defy requirements or change the expectations of their stakeholders. Thus, the decision to have lay members or not was connected to these different organizational responses.
For instance, in one case lay members were accepted (acquiescence), in another implemented only to avoid more drastic changes (compromise), and in yet another openly resisted (defiance). Finally, for theoretical generalization the different environmental influences and organizational responses from all categories were brought together (see Table 21.3).

Table 21.3 Result of interpretation

               Regulative            Normative             Cultural-cognitive
Acquiescence   PCC, PCI, SPR         DPR, PCC, PCI, SPR    DPR, PCC, PCI, SPR
Compromise     DPR, PCC, PCI, SPR    DPR, PCI, SPR         -
Avoidance      -                     -                     -
Defiance       PCC, PCI              DPR, PCC              -
Manipulation   DPR, PCI, SPR         DPR, PCC, PCI, SPR    -

SPR Swiss Press Council; DPR German Press Council; PCI Press Council of Ireland; PCC Press Complaints Commission

Mayring (2014) stresses that qualitative content analysis not only allows for interpretation but also for quantitative analysis, e.g. of the frequencies of categories. However, the usefulness of frequencies in media and communication policy research dealing with a limited number of cases is another question.

Presentation of Results

Data analysis is followed by the presentation of results (step 7), although in qualitative research interpretation and presentation are closely linked (Nawratil & Schönhagen, 2009). First, researchers need to know which story they want to tell. 'Without knowing the "what" and the "why" of the story, the "how"—the form of the story—becomes problematic' (Brinkmann & Kvale, 2015, p. 304). Results should be structured around themes that emerged from theory and/or analysis, which allows for putting them into context and for discussing what we learn from the study (Jackson, 2000). Few standards exist regarding the 'how'. As a general rule, scholars should avoid lengthy and tiresome reports that make extensive use of verbatim quotes and instead tell a captivating story supported by their data.
Only quotes that are particularly meaningful and illustrative should be used. Moreover, quotes always need to be put into context. Although qualitative data are textual in nature, scholars can make use of tables to present the most important results (e.g. patterns found during interpretation) or even try to visualize their data (Brinkmann & Kvale, 2015). Readers will appreciate it.

Conclusion

Documents, interviews, group discussions and observations are invaluable data sources when interested in the media policy-making process and the actors involved or when looking into governance options and media regulation. And qualitative content analysis is an essential method for making sense of such data. Thanks to its rule-based step-by-step approach to both coding and interpretation, qualitative content analysis is characterized by its intersubjective comprehensibility and the possibility to evaluate the quality of research, most importantly intercoder reliability. Coding guidelines, trial codings and revisions of the codebook as well as discussions of coding decisions should ensure that each coder assigns text segments to categories in the same way. Moreover, the method is easy to learn and allows for both inductive and deductive code development (as well as combinations). With qualitative content analysis, scholars of media and communication policy research have one more enticing method for data analysis in their research toolbox.

References

Alemann, U. v., & Tönnesmann, W. (1995). Grundriss: Methoden in der Politikwissenschaft. In U. v. Alemann (Ed.), Politikwissenschaftliche Methoden. Grundriss für Studium und Forschung (pp. 17-140). Opladen: Westdeutscher Verlag.

Berelson, B. (1952). Content analysis in communication research. Glencoe: The Free Press.

Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27-40. https://doi.org/10.3316/QRJ0902027.

Brinkmann, S., & Kvale, S. (2015). Interviews: Learning the craft of qualitative research interviewing (3rd ed.). Los Angeles: Sage.

Fürst, S., Jecker, C., & Schönhagen, P. (2016). Die qualitative Inhaltsanalyse in der Kommunikationswissenschaft. In S. Averbeck-Lietz & M. Meyen (Eds.), Handbuch nicht standardisierte Methoden in der Kommunikationswissenschaft (pp. 209-225). Wiesbaden: Springer VS.

Jackson, P. (2000). Writing up qualitative data. In D. Burton (Ed.), Research training for social scientists (pp. 244-252). London: Sage.

Julien, H. (2008). Content analysis. In L. M. Given (Ed.), The Sage encyclopaedia of qualitative research methods (pp. 121-122). Thousand Oaks: Sage. https://doi.org/10.4135/9781412963909.n65.

Just, N., & Puppis, M. (2012). Communication policy research: Looking back, moving forward. In N. Just & M. Puppis (Eds.), Trends in communication policy research: New theories, methods and subjects (pp. 9-29). Bristol and Chicago: Intellect.

Kohlbacher, F. (2006). The use of qualitative content analysis in case study research. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 7(1). http://www.qualitative-research.net/index.php/fqs/article/view/75/153.

Kracauer, S. (1952). The challenge of qualitative content analysis. Public Opinion Quarterly, 16(4), 631-641.

Mason, J. (2002). Qualitative researching (2nd ed.). London; Thousand Oaks; and New Delhi: Sage.

Mayring, P. (2002). Einführung in die qualitative Sozialforschung (5th ed.). Weinheim and Basel: Beltz.

Mayring, P. (2014). Qualitative content analysis: Theoretical foundation, basic procedure and software solution. https://www.ssoar.info/ssoar/bitstream/handle/document/39517/ssoar-2014-mayring-Qualitative_content_analysis_theoretical_foundation.pdf.

Mayring, P., & Hurst, A. (2017). Qualitative Inhaltsanalyse. In L. Mikos & C. Wegener (Eds.), Qualitative Medienforschung. Ein Handbuch (2nd ed., pp. 494-502). Konstanz and München: UVK.
Meuser, M., & Nagel, U. (2009). The expert interview and changes in knowledge production. In A. Bogner, B. Littig, & W. Menz (Eds.), Interviewing experts (pp. 17-42). Basingstoke: Palgrave Macmillan.

Nawratil, U., & Schönhagen, P. (2009). Die qualitative Inhaltsanalyse: Rekonstruktion der Kommunikationswirklichkeit. In H. Wagner (Ed.), Qualitative Methoden in der Kommunikationswissenschaft (pp. 333-346). Baden-Baden: Nomos.

Puppis, M. (2009). Organisationen der Medienselbstregulierung. Köln: Halem.

Reh, W. (1995). Quellen- und Dokumentenanalyse in der Politikfeldforschung: Wer steuert die Verkehrspolitik? In U. v. Alemann (Ed.), Politikwissenschaftliche Methoden. Grundriss für Studium und Forschung (pp. 201-259). Opladen: Westdeutscher Verlag.

Ryan, G. W., & Bernard, H. R. (2000). Data management and analysis methods. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 769-802). London; Thousand Oaks; and New Delhi: Sage.

Schreier, M. (2014). Qualitative content analysis. In U. Flick (Ed.), The Sage handbook of qualitative data analysis (pp. 170-183). London: Sage. https://doi.org/10.4135/9781446282243.

Scott, J. (1990). A matter of record: Documentary sources in social research. Cambridge: Polity Press.

van den Hoonaard, W. C. (2008). Inter- and intracoder reliability. In L. M. Given (Ed.), The Sage encyclopaedia of qualitative research methods (p. 446). Thousand Oaks: Sage. https://doi.org/10.4135/9781412963909.n223.

Further Reading

Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27-40. https://doi.org/10.3316/QRJ0902027.

Mayring, P. (2014). Qualitative content analysis: Theoretical foundation, basic procedure and software solution. https://www.ssoar.info/ssoar/bitstream/handle/document/39517/ssoar-2014-mayring-Qualitative_content_analysis_theoretical_foundation.pdf.

Meuser, M., & Nagel, U. (2009). The expert interview and changes in knowledge production. In A. Bogner, B. Littig, & W. Menz (Eds.), Interviewing experts (pp. 17-42). Basingstoke: Palgrave Macmillan.

Nawratil, U., & Schönhagen, P. (2009). Die qualitative Inhaltsanalyse: Rekonstruktion der Kommunikationswirklichkeit. In H. Wagner (Ed.), Qualitative Methoden in der Kommunikationswissenschaft (pp. 333-346). Baden-Baden: Nomos.