
CHAPTER 4 METHODOLOGY

4.6. Data Analysis

4.6.1. Content Analysis

objectively and systematically identifying specified characteristics of messages”.

Similarly, Cole (1988) summarized content analysis as a “research method for analyzing written, verbal and visual communication messages”. As Downe-Wamboldt (1992:314) put it, content analysis is a striking technique used by many researchers with the aim “to provide knowledge and understanding of the phenomenon under study”. As many scholars have noted, content analysis may be applied to content gathered from a wide variety of sources, such as written texts, speeches, interview transcripts, observations, advertisements, academic databases, campaigns, images, photographs, news, articles, videos, sounds, audio recordings, graphics, social media accounts, online sources, websites, forums, blogs, and print media (Kondracki et al., 2002; Neuendorf, 2002; Mayring, 2004; Scheufele, 2008; Stemler, 2015; Neuendorf and Kumar, 2015). In addition, the process of content analysis has been divided into two approaches: quantitative content analysis and qualitative content analysis.

Quantitative content analysis was defined by Berelson (1952:18), who is known as the father of content analysis, as “a research technique for the systematic, objective, and quantitative description of the manifest content of communication”. According to Scheufele (2008), quantitative content analysis adopts a deductive approach to measure the data quantitatively. Similarly, Morgan (1993) described quantitative content analysis as a technique also referred to as the quantitative analysis of qualitative data. In contrast, qualitative content analysis is associated with qualitative data and concerns the meanings of content and words rather than numbers (Elo et al., 2014). As clarified by Weber (1990), the qualitative content analysis method plays a crucial role in classifying a large amount of data into an efficient number of categories. According to Elo and Kyngäs (2008), both inductive and deductive content analysis include the same phases, which start with the preparation of data, continue with organizing it, and eventually conclude with reporting. However, the purposes of these two approaches differ. As the same authors add, deductive content analysis is generally used to retest a hypothesis or theory that already exists in the related literature. Conversely, inductive content analysis is preferred by the researcher when knowledge of the issue or phenomenon is lacking in the existing literature (Lauri and Kyngäs, 2005). Not only the purposes but also the steps differ in the application of these approaches. More precisely, inductive content analysis comprises three distinctive steps, as

demonstrated in Figure 4.3. These steps are open coding, creation of categories, and abstraction (Elo and Kyngäs, 2008). Open coding, one of the crucial processes of analyzing textual content (Khandkar, 2009), is the first step of inductive content analysis.

Figure 4.3. Steps of Inductive Content Analysis (Source: Elo and Kyngäs, 2008)

In this step, gathered text segments are allocated to codes in the coding scheme that the researcher has already developed. The researcher then takes notes while reading the textual data and carefully groups the possible headings accordingly. Secondly, the researcher creates categories based on the previous step.

Lastly, abstraction is performed by the researcher as a final step in order to formulate a general description of the topic (Robson, 1993). To sum up, inductive content analysis begins with open coding, which first creates subcategories (codes). These subcategories then lead to the creation of categories, also known as generic categories. Finally, these generic categories constitute the main categories (themes) (Dey, 2003; Kyngäs, 2020). Similarly, Erlingsson and Brysiewicz (2017) proposed this ordering as code, category, and theme, respectively. On the other hand, deductive content analysis comprises three steps, similar to inductive content analysis (Figure 4.4). The development of a categorization matrix is the first step of this analysis technique (Vimal and Subramani, 2017). In the second step, data are coded according to the categories. Finally, a review, or possibly a comparison with earlier studies, is done by the researcher to retest the existing data.
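The code → generic category → main category (theme) progression described above can be sketched as a simple nested mapping; all labels below are hypothetical illustrations, not codes from this study:

```python
# Illustrative sketch of the inductive abstraction hierarchy:
# open codes are grouped into generic categories, which are in turn
# abstracted into main categories (themes). All labels are hypothetical.

themes = {
    "Service Quality": {                       # main category (theme)
        "Staff Behaviour": [                   # generic category
            "friendly staff", "slow check-in"  # open codes from text segments
        ],
        "Facilities": ["clean rooms", "old furniture"],
    },
}

def codes_under(theme: str) -> list:
    """Flatten every open code grouped under a given theme."""
    return [code
            for category in themes.get(theme, {}).values()
            for code in category]

print(codes_under("Service Quality"))
# ['friendly staff', 'slow check-in', 'clean rooms', 'old furniture']
```

The nesting mirrors the ordering proposed by Erlingsson and Brysiewicz (2017): codes sit at the innermost level and themes at the outermost.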

As can be seen, inductive content analysis describes the movement of data from the specific to the general, whereas in deductive content analysis the data conversely move from the general to the specific (Burns and Grove, 2001).


Figure 4.4. Steps of Deductive Content Analysis (Source: Elo and Kyngäs, 2008)

Reliability is another essential issue for both inductive and deductive content analysis.

In this regard, intercoder reliability (also known as interrater reliability) is the process used in content analysis to establish the reliability of researchers’ studies (Tinsley and Weiss, 1975). Basically, intercoder reliability can be defined as a measure of agreement between the researchers who code the related data (Kurasaki, 2000; Burla et al., 2008). As stated by Lombard et al. (2005), intercoder reliability is a crucial component of content analysis that must be considered by researchers to address subjectivity. Undoubtedly, the foremost aims of this process are to minimize subjective bias and enhance the credibility of the study results. According to Freelon (2010), two or more trained coders or experts are required to make this assessment. In order to assess the agreement between coders and calculate reliability, scholars have proposed many methods to measure intercoder reliability scientifically, such as percent agreement, Holsti’s method, Scott’s Pi, Cohen’s Kappa, and Krippendorff’s Alpha (Lombard et al., 2005; De Swert, 2012).
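Two of the measures listed, percent agreement and Cohen’s kappa, can be computed directly from two coders’ label sequences. The sketch below uses hypothetical coders and codes:

```python
from collections import Counter

# Two hypothetical coders assigning one of three codes to ten segments.
coder_a = ["A", "A", "B", "B", "C", "A", "B", "C", "A", "B"]
coder_b = ["A", "A", "B", "C", "C", "A", "B", "C", "B", "B"]

def percent_agreement(a, b):
    """Share of units on which both coders assigned the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's (1960) kappa: observed agreement corrected for chance."""
    n = len(a)
    p_o = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    # Expected chance agreement from each coder's marginal code frequencies.
    p_e = sum(counts_a[k] * counts_b[k] for k in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

print(round(percent_agreement(coder_a, coder_b), 2))  # 0.8
print(round(cohens_kappa(coder_a, coder_b), 2))       # 0.7
```

Kappa is lower than raw percent agreement because it discounts the agreement the two coders would reach by chance alone, which is why it is generally preferred over simple percent agreement.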

As a widely used model, Cohen’s original kappa formula has been preferred by many scholars for establishing intercoder reliability. According to Cohen (1960), kappa values vary from 0 to 1. As classified by Everitt (1996), kappa values are divided into three bands: moderate, satisfactory, and nearly perfect agreement. More precisely, kappa values between .41 and .60 indicate moderate agreement. Second, kappa values above .60 are referred to as satisfactory or solid agreement. Finally, kappa values above .80 stand for nearly perfect agreement between the coders.
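Everitt’s (1996) bands, as summarized in the paragraph above, translate into a simple threshold check (the label for values below .41 is an assumption, since the text does not name that band):

```python
def classify_kappa(kappa: float) -> str:
    """Interpret a kappa value using Everitt's (1996) bands
    as summarized in the text: .41-.60 moderate, above .60
    satisfactory, above .80 nearly perfect."""
    if kappa > 0.80:
        return "nearly perfect agreement"
    if kappa > 0.60:
        return "satisfactory agreement"
    if kappa >= 0.41:
        return "moderate agreement"
    return "below moderate agreement"  # band not named in the text

print(classify_kappa(0.55))  # moderate agreement
print(classify_kappa(0.85))  # nearly perfect agreement
```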


Figure 4.5. Steps of Assessing Intercoder Reliability (Source: Burla et al., 2008)

According to Burla et al. (2008), the assessment of intercoder reliability comprises three stages, as demonstrated in Figure 4.5. The first stage starts with the development of the coding scheme, which includes the name of the code, the definition of the code, an example text segment (unit of analysis), and the rules of the coding procedure. In this stage, the initial version of the coding scheme is first discussed and then coded independently by two experts. At the end of this process, some modifications (exclusion of codes, reduction of codes, or reassignment of codes) are made by these experts if required.

Eventually, a final coding scheme is created for the assessment of intercoder reliability.
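A coding-scheme entry of the kind described above, with its four elements, can be represented as a small record type; the field contents below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CodeEntry:
    """One entry of a coding scheme, mirroring the four elements
    named in the text. Example values are hypothetical."""
    name: str        # name of the code
    definition: str  # definition of the code
    example: str     # example text segment (unit of analysis)
    rule: str        # rule of the coding procedure

entry = CodeEntry(
    name="Cleanliness",
    definition="References to hygiene of rooms or public areas",
    example="The bathroom was spotless.",
    rule="Apply only when hygiene is mentioned explicitly.",
)

print(entry.name)  # Cleanliness
```

Keeping the scheme in a structured form like this makes it straightforward for two coders to work from an identical final version when their agreement rates are calculated.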

In the second stage of this process, intercoder reliability is assessed by calculating the coders’ agreement rates. Lastly, a final review of the codes and the coding is performed in the third stage.