
CHAPTER 3: METHODOLOGY

3.7. Instruments and Data Collection Processes

3.7.1. Quantitative data set

3.7.1.1. Survey. Over the past two decades of LLS research, researchers have used various methods to gain insight into the mental processes learners use as they work on a new language. Self-report has therefore gained importance in LLS research and has been carried out through various techniques (Chamot, 2014). As one of those techniques, “student-completed, summative rating scales” (survey methods) have gained significant popularity in LLS research (Oxford & Burry-Stock, 1995) and are used in a variety of studies to provide information about learners’ own learning processes.

By their nature, not all LLSs are observable (Chamot, 2004). A strategy that is a mental procedure can also be paired with another strategy that is a behaviour. For example, a learner may use selective attention (unobservable) in order to understand, store, or retrieve information during a listening activity and may also take notes (observable) in order to remember that information. Surveys are therefore used as a way to portray the strategies learners hold.

Surveys are cost-effective and provide a quick picture of language learners’ strategy use (Oxford & Burry-Stock, 1995). Along with their ease of administration, they are handy not only for indicating whether a particular strategy is used but also for rating how frequently it is used. However, students may not grasp the intent of a question and may therefore answer according to what they perceive to be the right answer (Chamot, 2014). They may also be unaware of which strategies they have used, or may claim to use strategies that they do not in fact use (Chamot, 2004). A further weakness of surveys is that they may not capture all the dimensions of learners’ strategy use and thus may not allow deep insight into what learners actually do (Gao, 2004). For this reason, it is more valid to support surveys with qualitative, context-sensitive measures in order to avoid over-reliance on them (Oxford, 2003; Yamamori et al., 2003). In this study, as a first step to identify strategies, the “Children’s Speaking Strategy Use Survey” was developed to determine children’s strategy use in the speaking skill.

3.7.1.1.1. Children’s speaking strategy use survey. After a review of the relevant literature, the survey used in the present study was developed by the researcher and adapted from three instruments in order to ensure its suitability for the age and level of the participants. The instruments adapted for the study were the “Young Learners’ Language Strategy Use Survey” by Cohen and Oxford (2002), the “Children’s Inventory for Language Learning Strategies - CHILLS” by Gürsoy (2003), and the “Taiwanese Children Strategy Inventory for Language Learning” by Lan and Oxford (2003). By design, these taxonomies investigate LLS use, partially or wholly, in different skills (listening, speaking, reading, and writing) and in some other language features (vocabulary or translation).

Within the framework of this study, 23 items that would reflect the LLSs used for the speaking skill were identified. The 23 items were designed as a reduced three-point Likert scale with short response options (“yes”, “sometimes”, and “no”), taking into account the students’ cognitive and metacognitive development. As these children are still in the period of concrete operations, it might have been difficult for them to distinguish between slightly different options such as “usually” and “sometimes”.

The items were also translated into Turkish (the children’s L1), with attention to simplicity and comprehensibility given the participants’ age, language, and world knowledge. The 23-item survey was given to five experts so that the content and face validity of the instrument could be assessed on the basis of their opinions. The experts were chosen from the English Language Teaching Department and examined the same items individually, using content and face validity forms developed by the researcher.

The content validity of the items was established by asking the experts to rate the appropriateness of each item as essential, useful but not essential, or not necessary. The Content Validity Ratio (CVR) of each item was calculated with Lawshe’s technique and compared with the minimum CVR value required for significance at the p = .05 level for the corresponding number of experts. As the number of experts was five, a CVR of .99 was required at the α = .05 level of significance. Accordingly, item 16 was combined with item 18, and four items with CVR < .99 (items 3, 5, 10, and 21) were removed from the instrument. Structural modifications were also made to items 2, 12, 16, 19, and 23. After the content validity index (CVI) was calculated for the whole instrument (Lawshe, 1975), the final version consisted of 18 items, with some changes in the instructions and in the wording of the item statements to make them more comprehensible to the students (see Appendix 3).
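
For reference, Lawshe’s (1975) technique computes the CVR of an item from the number of experts who rate it as essential; the worked value below is an illustration for the five-expert panel used here, not a reproduction of the experts’ actual ratings:

CVR = (n_e - N/2) / (N/2)

where n_e is the number of experts rating the item as essential and N is the total number of experts. With N = 5, unanimous agreement gives CVR = (5 - 2.5) / 2.5 = 1.00, tabulated by Lawshe as .99, so a single dissenting rating drops an item below the required value. The CVI reported for the whole instrument is the mean of the CVR values of the retained items.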

The revised survey was then given to a class of 5th graders (n = 23) in a different school to determine whether there were any comprehensibility problems. All items were completed by the students one by one in the researcher’s presence so that the comprehensibility of the items could be checked. Only two students had a problem with the meaning of the words (“sounds” in item 2 and “pronunciation” in item 11) because of their limited word knowledge, even though the items were presented in L1. Further explanations of these items were provided for those students, and this issue was taken into consideration for the actual application. A reliability coefficient of .77 was also calculated for the present study. In addition, a demographic information section was included, with questions about the students’ school number, age, gender, grade, and report card grades.
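
The specific reliability coefficient is not named above; assuming it is Cronbach’s alpha, the internal-consistency estimate commonly used for Likert-type instruments, it would be computed as

α = (k / (k - 1)) × (1 - Σ s_i² / s_t²)

where k is the number of items, s_i² is the variance of the scores on item i, and s_t² is the variance of the total scores. For the 18-item instrument this gives k / (k - 1) = 18/17, and the reported value of .77 is above the commonly cited .70 threshold for acceptable reliability.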

After official permission was obtained from the Bursa Provincial Directorate of National Education (see Appendix 4), the survey was administered at the beginning of the study to all 5th graders, including the experimental and control groups, as a pre-test. The same survey was administered to all 5th graders again at the end of the study as a post-test. The purpose of giving the pre- and post-tests to all 5th graders was to trace variations in their speaking strategy use over time. Before the survey was administered, the children were informed about the intent of the questions. They were told that it was not an exam and that there were no right or wrong answers, so that they could report truthfully. They were also informed that all the information they provided would remain confidential. When needed, unclear points were also clarified by the researcher.

3.7.1.2. Observation. This technique is considered one of the basic data sources for empirical research in that it provides direct and more objective information than second-hand self-report data. It is usually discussed in terms of two distinctions: “participant versus nonparticipant observation” and “structured versus unstructured observation” (Dörnyei, 2007). As a supportive and supplementary data collection method, observation may play a role in complementing or putting in perspective the data obtained with other data collection tools. As Gürsoy (2010, p. 168) suggests, observations enable us to see “what children actually do while learning”. Therefore, observation techniques were used to support the survey by showing whether the data reported in the survey were connected to the students' strategic behaviours occurring in a given context.

In this study, the students who completed the survey were observed during their lessons after the pre- and post-survey administrations. Observations were carried out through the “participant observation” method; that is, the researcher became a full member of the group and was involved in the setting during all the teaching hours. The researcher spent sufficient time with the children in order to minimize one of the possible drawbacks of the observation process, the Hawthorne effect, and thus to keep the children from altering their behaviours. However, it should be borne in mind that the observer effect cannot be eliminated completely, as stated by Adler and Adler (1987).

The observations took a “structured” form with a specific focus, namely the frequency of speaking strategy use (Dörnyei, 2007). As the observation scheme, the “Children’s Speaking Strategy Use Survey” was redesigned as a checklist (see Appendix 5). Fifteen observable items were chosen and used to establish whether the strategies reported by the children in the pre- and post-test surveys were actually used within the classroom environment. Following the “event sampling” method, the researcher entered a tally mark every time a speaking strategy was used, in order to obtain the total frequency of strategy use (Dörnyei, 2007).

The observations took place in two phases, before and after the intervention. The experimental group was observed for 11 classroom hours before the intervention and for 11 hours following it. Each observed lesson was audio-recorded and later transcribed to prevent loss of information during observation.

In view of a serious concern about structured observation, namely that highly structured protocols, as quantitative measures, are not considered sensitive to context-specific information and “may easily miss the insights that could be provided by participants themselves” (Allwright & Bailey, 1991, as cited in Dörnyei, 2007, p. 179), the observations were combined with another form of data collection, field notes, in order to obtain a satisfactory view of the observed phenomenon, as recommended by Dörnyei (2007).