Validity and Reliability of Mathematical Visual Literacy Skills Based on Avgerinou's Visual Literacy Index

Raja Lailatul Zuraida1*, Noraini Idris2, Haliza Abd Hamid3, Ruzela Tapsir4, Rosha Mohamed5, Farah Zahraa Salleh6, Nurakmal Ahmad Mustaffa7

1,6 Universiti Pendidikan Sultan Idris, Malaysia
2 National STEM Association, Malaysia
3,4,5 Universiti Teknologi MARA, Malaysia
7 Universiti Utara Malaysia

lailatul.zuraida@fsmt.upsi.edu.my*1

Article History: Received: 10 November 2020; Revised: 12 January 2021; Accepted: 27 January 2021; Published online: 05 April 2021

Abstract: There is much literature on visual literacy across different fields of knowledge. Even so, there is generally a gap in the literature on measuring mathematical visual literacy skills. The objective of this paper is to produce empirical data on the reliability and validity of a mathematical visual literacy skills instrument. The development of items was based on the skills outlined in Avgerinou's VL Index (2007). The early stage of validating the instrument required the researchers to seek face validity and content validity from panels of experts. Face validity was based on subjective judgements of the items, while content validity was determined by the Content Validity Index (CVI), computed using the Item-CVI (I-CVI) and the Scale-CVI (S-CVI). Each mathematical visual literacy skill had accepted S-CVI values ranging from 0.86 to 1.00, but items with low I-CVI values were deleted. Next, construct validity and reliability were determined using Exploratory Factor Analysis (EFA) and Cronbach's alpha respectively. The instrument, consisting of 43 items, was administered to 428 pre-university students, and students' responses were scored using an analytical rubric developed by the researchers. Using Principal Component Analysis (PCA) and varimax rotation, EFA was carried out and 40 retained items were extracted into 7 factors, each representing a visual literacy skill. A Kaiser-Meyer-Olkin (KMO) value of 0.721, a significant Bartlett's Test of Sphericity (BTS), communalities and anti-image values ranging between 0.308-0.721 and 0.503-0.835 respectively, 7 extracted factors explaining 53.685% of the total variance, factor loadings of ±0.520 and above, and an overall Cronbach's alpha of 0.82 together establish the validity and reliability of the instrument.

Keywords: Mathematical visual literacy skills, visual literacy skills, rubric, exploratory factor analysis, analytical rubric

1. Introduction

In today's world, we are flooded with visual messages through the internet and communication technologies as well as the film and advertising industries (Avgerinou, 2009). The massive growth of technological devices has caused people worldwide to depend more on visual media than on textual media for receiving information, and this has led education sectors to change how knowledge is transferred, as visual materials are now abundantly available. Students these days also incorporate more visual media into their projects and intellectual work, and they are encouraged to develop the skills needed to find, interpret, evaluate, use, and produce visual materials in a scholarly context (Hattwig, Bussert, Medaille, & Burgess, 2012). Research also suggests that the balance between images and words has shifted considerably, calling for new forms of literacy (Brumberger, 2011), and visual literacy seems to integrate both elements.

People often mistake the ability to use technology to create visuals for visual literacy; however, a visually literate person must be able to perceive and analyze the actions, objects and symbols around him/her, whether natural or artificial, without depending solely on technology (Alpan, 2015). Yenawine (1997) defined visual literacy as the ability to discover meaning in visuals, encompassing a range of abilities extending from simple identification to complex interpretation on contextual, metaphoric, and philosophical levels. Numerous aspects of cognition are called upon, for example personal association, questioning, speculating, analysing, fact-finding, and categorizing. Visual literacy can also be defined as the ability to read, understand, evaluate and interpret information presented in visual images (Wileman, 1993; Bristor & Drake, 1994). Visual thinking, the skill of transforming information into visual forms that aid communication, is associated with visual literacy (Wileman, 1993). Yeh and Lohr (2010) agree, defining visual literacy simply as the learned knowledge and skills needed to accurately comprehend, interpret, analyze and create visual messages.


Over the years, visual literacy has been acknowledged in the education sector for the importance and benefits it brings (Donaghy & Xerri, 2017), and it has been shown to benefit teaching and learning in many ways. For example, in teaching an astronomy course, Crider (2015) realized that the course relied heavily on astronomy visuals and therefore demanded visual literacy skills; he later found that students' exam performance improved when he revised his teaching materials to engage visual information alongside the numeric data (Crider, 2015). In Bell's (2014) comparison of learning approaches among college biology students, the group who learned through a traditional drawing activity gained significantly higher quiz grades than the group who learned through a computer activity. This echoes Lakoff and Johnson's (1999) finding that haptic (exploratory movement) information is involved in shaping the brain's cognitive structures; in other words, a person's movement helps shape his/her way of thinking. Exposure to traditional drawing tasks among present-day college students has been emphasized, since this method is believed to strengthen knowledge construction and integration (Van Meter & Garner, 2005). Rosken and Rolka (2006) stated that visualization, one of the most important abilities within visual literacy, can be a powerful tool for exploring mathematical problems and for giving meaning to mathematical concepts and the relationships between them during mathematics learning. Mastering mathematical skills has always been highlighted as something to be improved continuously in order to prepare skilful, well-equipped human resources in line with rapid development and the needs of a developed country. Despite the many benefits visual literacy has shown in past studies, there is still a lack of studies on the assessment of visual literacy (Matusiak, Heinbach, Harper, & Bovee, 2019; Bowen, 2017). Hence, the researchers see a need to develop an appropriate instrument to assess mathematical visual literacy skills, specifically among pre-university students in Malaysia.

To prepare Malaysian graduates who are well-educated and competent on the global stage, their prior knowledge must be well polished from the early levels of education. At university level, students' mathematical knowledge and skills reflect the basic knowledge acquired at pre-university level (Zulkifli, Norain, Norngainy, & Firdaus, 2015; Tan, Abang, & Farah, 2017). Many pre-university programmes are offered in Malaysia, including the Malaysian Matriculation certificate programme, the Malaysian High School Certificate (STPM) programme, diploma courses and foundation studies. In a study on pre-university students' preference for, and level of use of, graphs in solving applied derivative problems (Haliza & Noraini, 2014), students were found to have difficulty providing visual reasoning in their mathematical solutions. Furthermore, a study by Zuraida, Noraini, Haliza, Ruzela, Rosha and Farah Zahraa (2020) discovered that the engagement of visuals and visual literacy skills in Malaysian pre-university examination questions was still lacking: of all the question sets analyzed, only 15.5% of the questions tested visual literacy skills, while the remaining 84.5% tested none. The visual literacy skills examined were based on Avgerinou's Visual Literacy Index, developed in 2007. The skills and their definitions outlined by Avgerinou (2007) can be found in Table 1.

Table 1. Visual Literacy Skills Definitions by Avgerinou (2007)

Construct: Visual Information (no sub-construct)
Definition: The abilities to have knowledge of visual vocabulary, knowledge of visual conventions and knowledge of the definitions of mathematical signs and symbols. A person is expected to be able to recognize the main features of visual representations of mathematical terminology, and to demonstrate and apply his/her understanding of the meaning of mathematical notations, signs and symbols through agreement with the given definitions.

Construct: Intellectual Skill; Sub-construct: Discrimination
Definition: The abilities to perform visual discrimination and reconstruction, visual reasoning, and to have knowledge of visual conventions and associations. A person is expected to be able to recognize differences in shape between/among two-dimensional visual mathematical representations, and to read, discriminate, group, and interpret the information displayed in visual mathematical representations.

Construct: Intellectual Skill; Sub-construct: Concrete Concept
Definition: The abilities to understand and apply knowledge of design principles and their use, besides having critical viewing skills.

Construct: Intellectual Skill; Sub-construct: Defined Concept
Definition: The abilities to have knowledge and understanding of the meaning of mathematical signs or symbols and capability in critical viewing. A person with this skill should be able to use mathematical definitions to classify given examples of signs and symbols (or both).

Construct: Intellectual Skill; Sub-construct: Rule
Definition: The abilities to mentally trace the constituent elements of a visual representation and decide on their associations, to interpret visual representations of data as correct representations of the given information, and to identify, interpret and compare the visual information displayed.

Construct: Intellectual Skill; Sub-construct: Higher Order Rule
Definition: The abilities to perform visual reasoning and visualization and to construct meaning for abstract mathematical concepts. A person is expected to be able to visualize, reason about, reconstruct and produce a concrete-abstract continuum of a given visual representation, and to determine the reconstruction of an element of an object from the evidence of elements on other visible surfaces of the same object.

Construct: Cognitive Strategy (no sub-construct)
Definition: The abilities to perform observation, visualization and visual thinking. With this skill, a person is expected to be able to mentally combine or remove, one at a time, components of mathematical visual representations to expose the abstract concepts, and to visualize and perform a given procedure following textual instructions.

Zuraida et al. (2020) further studied the mathematical visual literacy skills acquired by pre-university students by administering a set of questions covering each of the visual literacy skills outlined in Avgerinou's Visual Literacy Index (2007). The first assessment contained subjective questions, and findings showed that the students performed very poorly. The researchers therefore modified the questions to an objective format in the hope that the students would be encouraged to answer better. Results showed improvements in the percentage for each skill; however, the overall findings still illustrated poor performance. Therefore, a more in-depth study should be done to strengthen this work by preparing a more appropriate assessment containing more items testing all the mathematical visual literacy skills (Zuraida et al., 2020).

In order to further study mathematical visual literacy skills among Malaysian pre-university students, an instrument that is valid and reliable must be established. The researchers developed the items in objective format using Avgerinou's Visual Literacy Index (2007). A few terms are commonly used to refer to objective items: multiple-choice items, closed-ended items or selected-response items. This format is widely used for educational assessment purposes (Gierl, Bulut, Guo, & Zhang, 2017). According to Downing (2006), if the intention of an examination is to measure a wide range of abilities, knowledge, or cognitive achievement, especially higher order cognitive skills such as problem solving, synthesis, and evaluation, it is best to employ multiple-choice items. Compared to subjective items, respondents usually spend less time answering objective questions (Haladyna, 2004; Liu & Jansen, 2015). The format is more convenient to administer and allows a broad range of the content domain to be tested within a short time period (Haladyna & Rodriguez, 2013; Rodriguez, 2016). An objective item is made up of the stem, the options, and any auxiliary information (Gierl et al., 2017). The stem includes the context, content, and/or the question to be answered, while the options consist of alternative answers, including one correct option (the key) and some incorrect options (the distractors) (Bhasah, 2009). With objective items, researchers expect to receive one or more correct answers rather than responses based on the respondents' personal experience (Liu & Jansen, 2015). The responses can be scored objectively (Haladyna & Rodriguez, 2013; Rodriguez, 2016), which lets assessment developers analyze them more quickly (Scully, 2017).
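The item anatomy described above (stem, options, key, distractors) can be modelled directly. The following is a minimal sketch, not taken from the paper; the class, field names and the chosen key are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ObjectiveItem:
    stem: str              # context/content and the question to be answered
    options: list[str]     # all alternative answers offered
    key: str               # the single correct option
    auxiliary: str = ""    # e.g. a graph or table the stem refers to

    @property
    def distractors(self) -> list[str]:
        # every option that is not the key acts as a distractor
        return [o for o in self.options if o != self.key]

# Hypothetical item; the key chosen here is arbitrary.
item = ObjectiveItem(
    stem="Which graph represents y = 1/f(x)?",
    options=["Graph A", "Graph B", "Graph C", "Graph D"],
    key="Graph B",
)
print(item.distractors)  # ['Graph A', 'Graph C', 'Graph D']
```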

To analyze students' responses, the researchers used a scoring rubric as a guideline. Rubrics act as an effective means of providing reliable and valid professional judgement in assessing students' performance (Pellegrino, Baxter, & Glaser, 1999). In an assessment, students' performances can be scored using holistic scoring or analytical scoring (Bhasah, 2009). The two methods differ: in holistic scoring, a student's work is given a single score as a complete unit, representing the entire process or product (Nitko, 2001); in analytical scoring, a score is assigned to each element and the scores are combined into an overall score (Petkov & Petkova, 2006). The advantage of the latter is that students' strengths and weaknesses in different skills or abilities can be explained in detail (Reddy, 2011).

Moskal and Leydens (2000) highlighted the importance of validity and reliability in designing scoring rubrics. To support the validity of an assessment instrument, three types of evidence are normally examined: content-related evidence, construct-related evidence, and criterion-related evidence. These different types of evidence also reflect the purposes and objectives of the assessment. Content-related evidence refers to the degree to which a student's responses to a given assessment mirror that student's knowledge of the content area being investigated; it is used when the intention of an assessment is to elicit evidence of an individual's knowledge within a given content area, such as historical facts. If the assessment instrument is intended to gauge reasoning, problem solving or other processes that are internal to an individual, and therefore requires more indirect examination, then the appropriateness of the construct-related evidence should be examined. Meanwhile, criterion-related evidence supports the extent to which the results of an assessment correlate with a current or future event; it is used when the assessment instrument is to elicit evidence of how a student will perform outside of school or in a different situation.

On the other hand, reliability in rubric design concerns the consistency of the scores given in an assessment (Moskal & Leydens, 2000). Inconsistencies in the scoring process arise from influences internal to the rater rather than from differences in students' performances. A good scoring rubric overcomes this concern by serving as a guideline throughout the scoring process so that consistency is maintained. It is advised to ensure that the scoring categories are well defined, that the differences between score categories are clear, and that two independent raters would arrive at a similar score for a given response based on the rubric. If any of these conditions is not satisfied, the unclear score categories need to be revised. One common way to check the rater-consistency condition quantitatively is sketched below.
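The paper does not name an agreement statistic; purely as an illustration, Cohen's kappa is one standard way to quantify whether two independent raters assign similar rubric scores to the same responses. The scores below are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

rater_1 = [6, 8, 3, 0, 5, 8, 2, 6]   # hypothetical rubric scores from rater 1
rater_2 = [6, 8, 3, 1, 5, 8, 2, 6]   # the same responses scored by rater 2

# Kappa corrects raw agreement for chance; values near 1 indicate that the
# rubric's categories are clear enough for raters to apply consistently.
print(round(cohen_kappa_score(rater_1, rater_2), 2))
```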

Additionally, Baryla, Shelley, and Trainor (2012) used factor analysis to identify the number of criteria in their study on oral communication assessment. The criteria were measured using an analytical rubric in which each criterion was scored 0, 1, or 2, representing "does not meet", "meets", or "exceeds expectations" respectively. There were 49 criteria initially listed; however, the researchers recognized the possibility of redundancy, whereby some criteria might measure the same thing. Hence, they used factor analysis to reduce the number of criteria, since it is a statistical method that reduces a large number of variables to a smaller, more manageable number by identifying the unique underlying criteria, also called factors (Baryla et al., 2012). They suggested that learning institutions should consider conducting factor analysis as part of the process of continually improving their assessment instruments, since this procedure makes it easier to evaluate an assessment program, design rubrics more efficiently and lessen the burden on faculty in conducting the assessment. Meanwhile, Mertler (2000) proposed a step-by-step guideline for rubric design (Figure 1).

Step 1: Re-examine the learning objectives to be addressed by the task.

Step 2: Identify specific observable attributes that you want to see (as well as those you don't want to see) your students demonstrate in their product, process, or performance.

Step 3: Brainstorm characteristics that describe each attribute.

Step 4: Write thorough narrative descriptions for excellent work and poor work for each individual attribute.

Step 5: Complete the rubric by describing other levels on the continuum that ranges from excellent to poor work for each attribute.

Step 6: Collect samples of student work that exemplify each level.

Step 7: Revise the rubric, as necessary.

Figure 1. Steps in Mertler's Analytical Rubric Design

2. Objectives

The objectives of this study are:

i. To develop mathematical visual literacy skills items in objective format for Malaysian pre-university students based on Avgerinou's Visual Literacy Index.

ii. To prepare an appropriate analytical scoring rubric as a guideline for scoring students' responses.

iii. To establish the validity of the instrument through content validity using the Content Validity Index (CVI) and construct validity using Exploratory Factor Analysis (EFA).

iv. To establish the reliability of the instrument using Cronbach's alpha.

3. Methods

In designing the scoring rubric for the mathematical visual literacy skills items in this study, the researchers used the analytical rubric design proposed by Mertler (2000), as shown in Figure 1. The first step in any rubric design is identifying the purpose of the study, as suggested in Step 1 of Mertler's design. In this study, the researchers were interested in studying mathematical visual literacy skills among pre-university students in Malaysia, hence an appropriate assessment tool needed to be developed. Next, the attributes to be measured were specified in Step 2. In developing the mathematical visual literacy skills items, the researchers considered two important elements: students' skill in solving mathematical visual literacy problems, and their explanations. These ideas were brainstormed in detail in Step 3. For each question, a score would be given based on both attributes: the answer given together with the explanation based on the visuals involved.

In Step 4, thorough narrative descriptions from poor work to excellent work were proposed for each attribute. The rubric scores function as an ordinal scale. Scores were classified based on the answers and explanations given, with a correct answer prioritized over the explanation. To fulfil Step 5 of the rubric design, a complete rubric needed to be proposed. Table 2 shows the complete scoring rubric used in this study, adapted from the analytical scoring rubric developed by Zuraida et al. (2020).

Table 2. Scoring Rubric and Descriptions

Rubric Score | Description
1 | Omitted
2 | Incorrect answer and unreasonable explanation
3 | Incorrect answer and reasonable explanation


Once the rubric was prepared, the researchers were ready to prepare the instrument consisting of mathematical visual literacy items. All items were developed based on the visual literacy skills outlined in Avgerinou's Visual Literacy Index (Avgerinou, 2007). Seven skills were to be tested in the instrument, which contained a total of 47 objective items. The number of items followed suggestions from past research, since factor analysis would be used in this study: the number of items per factor (the construct, in factor analysis terms) should vary from three to five (MacCallum, Widaman, Zhang, & Hong, 1999; Fabrigar, Wegener, MacCallum, & Strahan, 1999; Raubenheimer, 2004). Hence, 5 to 7 items were developed for each visual literacy skill construct, satisfying the minimum requirement. A few extra items were added to every construct/sub-construct as back-up in case of item removal, which usually happens during the procedures for establishing validity and reliability.

Next, the researchers collected samples of students' work exemplifying every level in the rubric (Step 6). To justify the quality of the instrument before data collection, it is important to establish its validity and reliability. According to Field (2005), validity refers to the degree to which a research instrument measures the constructs being studied precisely, and measures what it ought to measure. It also refers to the integrated judgement made in evaluating empirical evidence and theoretical rationales that support the suitability and adequacy of interpretations and actions based on test scores or other assessments (Messick, 1992). Meanwhile, reliability refers to the degree of consistency with which the instrument measures what it is intended to measure (Ary, Jacobs, Razavieh, & Sorensen, 2006). It also implies that the answers or score results are approximately similar when the exact item is tested several times on the same subject at different time intervals (Wainer & Braun, 1988).

In this study, the researchers used face validity, content validity and construct validity to validate the instrument. The instrument and a validation form were sent to a panel of four experts, all with more than ten years of experience as mathematics lecturers. The experts gave feedback for face validity, making general comments and suggestions on the instrument's appearance in terms of consistency of style and formatting, legibility, feasibility and clarity of language (Taherdoost, 2016). Next, the researchers quantified the degree of agreement among the experts to determine the content validity of the instrument. Content validity, as described by Waltz, Strickland and Lenz (2005), concerns whether the items in the instrument cover the domain of content to be studied. For this purpose, the CVI was calculated for all individual items (I-CVI) and for the overall scale (S-CVI). To rate each item's relevancy to the construct it represents, the experts used a 4-point scale: 1 = not relevant, 2 = somewhat relevant, 3 = quite relevant, 4 = highly relevant. Each item rated 3 or 4 is considered to have met the agreement of an expert (Shrotryia & Dhanda, 2019). For five or fewer judges (experts), the I-CVI should be 1.00; in contrast, a minimum I-CVI value of 0.78 is accepted for six or more judges (Polit & Beck, 2006; Shrotryia & Dhanda, 2019). Next, the average of the I-CVI values is computed to find the content validity of the overall scale, S-CVI; a minimum S-CVI value of 0.8 reflects adequate content validity of an instrument (Polit & Beck, 2006). Tables 3 through 9 show the I-CVI and S-CVI values computed from the experts' evaluations.
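The computation behind Tables 3 through 9 is straightforward; the following is a minimal sketch (not the authors' code) of how I-CVI and S-CVI can be obtained from expert relevance ratings on the 4-point scale described above. The ratings shown are hypothetical.

```python
import numpy as np

# Hypothetical ratings: rows = items, columns = experts; values are 1-4.
ratings = np.array([
    [4, 4, 3, 4],   # an item all four experts rate as relevant
    [4, 3, 4, 4],
    [3, 1, 2, 4],   # a weak item that would be flagged
])

# An expert "agrees" an item is relevant if the rating is 3 or 4.
agreement = (ratings >= 3).astype(int)

# I-CVI: proportion of experts in agreement for each item.
i_cvi = agreement.mean(axis=1)          # -> [1.0, 1.0, 0.5]

# S-CVI (average approach): mean of the I-CVIs across the scale's items.
s_cvi = i_cvi.mean()                    # -> 0.83

# With four experts (five or fewer judges), Polit & Beck (2006) require
# I-CVI = 1.00, so the third item would be reviewed or removed.
flagged = np.where(i_cvi < 1.00)[0]
```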

Table 3. Experts Evaluation on Visual Information Skill Items

Item | Expert 1 | Expert 2 | Expert 3 | Expert 4 | No. in agreement | I-CVI
2(a) | X | X | X | X | 4 | 1.00
2(b) | X | X | X | X | 4 | 1.00
3(a) | X | X | X | X | 4 | 1.00
16(b) | X | X | X | X | 4 | 1.00
20(a) | X | X | X | X | 4 | 1.00
20(c) | X | X | X | X | 4 | 1.00
29 | X | X | X | X | 4 | 1.00
S-CVI: 1.00

Table 4. Experts Evaluation on Discrimination Skill Items

Item | Expert 1 | Expert 2 | Expert 3 | Expert 4 | No. in agreement | I-CVI
1(b) | X | X | X | X | 4 | 1.00
3(b) | X | X | X | X | 4 | 1.00
12(a) | X | X | X | X | 4 | 1.00
19(a) | X | X | X | X | 4 | 1.00
20(b) | X | X | X | X | 4 | 1.00
24(a) | X | X | X | X | 4 | 1.00
25(b) | X | X | X | X | 4 | 1.00
S-CVI: 1.00

Table 5. Experts Evaluation on Concrete Concept Skill Items

Item | Expert 1 | Expert 2 | Expert 3 | Expert 4 | No. in agreement | I-CVI
1(a) | X | X | X | X | 4 | 1.00
4(b) | X | X | X | X | 4 | 1.00
17 | X | X | X | X | 4 | 1.00
21 | X | X | X | X | 4 | 1.00
22 | X | X | X | X | 4 | 1.00
24(b) | X | - | - | X | 2 | 0.50
S-CVI: 0.92

Table 6. Experts Evaluation on Defined Concept Skill Items

Item | Expert 1 | Expert 2 | Expert 3 | Expert 4 | No. in agreement | I-CVI
5(a) | X | X | X | X | 4 | 1.00
5(b) | X | X | X | X | 4 | 1.00
6(a) | X | X | X | X | 4 | 1.00
6(b) | X | - | X | X | 3 | 0.75
13 | X | X | X | X | 4 | 1.00
16(a) | X | X | X | X | 4 | 1.00
19(b) | X | X | X | X | 4 | 1.00
S-CVI: 0.96

Table 7. Experts Evaluation on Rule Skill Items

Item | Expert 1 | Expert 2 | Expert 3 | Expert 4 | No. in agreement | I-CVI
6(c) | - | - | X | - | 1 | 0.25
9 | X | X | X | X | 4 | 1.00
10(a) | X | X | X | X | 4 | 1.00
10(b) | X | X | X | X | 4 | 1.00
11 | X | X | X | X | 4 | 1.00
12(b) | X | X | X | X | 4 | 1.00
23 | X | X | X | X | 4 | 1.00
S-CVI: 0.89

Table 8. Experts Evaluation on Higher Order Rule Skill Items

Item | Expert 1 | Expert 2 | Expert 3 | Expert 4 | No. in agreement | I-CVI
4(a) | X | X | X | X | 4 | 1.00
8 | X | X | X | X | 4 | 1.00
15 | X | X | X | X | 4 | 1.00
25(a) | X | X | X | X | 4 | 1.00
27 | X | X | X | X | 4 | 1.00
28 | X | X | X | X | 4 | 1.00
S-CVI: 1.00

Table 9. Experts Evaluation on Cognitive Strategy Skill Items

Item | Expert 1 | Expert 2 | Expert 3 | Expert 4 | No. in agreement | I-CVI
7 | X | X | X | X | 4 | 1.00
14 | X | X | X | X | 4 | 1.00
18 | X | X | X | X | 4 | 1.00
24(c) | - | - | - | - | 0 | 0.00
26(a) | X | X | X | X | 4 | 1.00
26(b) | X | X | X | X | 4 | 1.00
26(c) | X | X | X | X | 4 | 1.00
S-CVI: 0.86

Based on Tables 3 through 9, all S-CVI values exceeded 0.80, ranging from 0.86 to 1.00. However, the I-CVI values suggested that some items should be removed, and some of the remaining items still needed revision based on the experts' suggestions and comments. Table 10 shows the items removed from the instrument.

Table 10. Deleted Items Due to Low I-CVI

Item | I-CVI | Visual Literacy Skill
6(b) | 0.75 | Defined Concept
6(c) | 0.25 | Rule
24(b) | 0.50 | Concrete Concept
24(c) | 0.00 | Cognitive Strategy
Total items deleted: 4

After removing these 4 items, 43 of the 47 initially developed items were retained. Next, the instrument was administered to establish its reliability and construct validity. A total of 428 Malaysian pre-university students participated in this study; they were attending diploma programmes, the matriculation certificate programme, foundation studies and the STPM certificate programme, all majoring in science. Since factor analysis was used to examine construct validity, it was important to ensure that the sample size was sufficient (Coakes, Steed, & Ong, 2009). One recommendation is 10 samples per variable (i.e. per item in the instrument), a ratio of 1:10 (Hair, Black, Babin, Anderson, & Tatham, 2006; Nunnally, 1978). Meanwhile, Pallant (2011) states that a sample size of 150, or at least a ratio of 5 samples per variable, is adequate for factor analysis. For the 43 items here, these rules give a required sample of between 150 and 430 (43 × 5 = 215; 43 × 10 = 430), so the sample of 428 was considered sufficient. The construct validity and reliability of the instrument were then obtained from the analyzed scores given to the items. Since the researchers intended to evaluate students' performances across different visual literacy skills, an analytical rubric was used. However, while marking students' responses, the researchers found new cases in which the responses did not fit any category of the scoring rubric. This prompted a revision of the scoring rubric; Step 7 of Mertler's (2000) design recommends revising the rubric as necessary. Table 11 shows some cases found in students' responses.


Table 11. Sample of Students' Responses

Item 2a. Question 2a explores students' Visual Information skill. The students were expected to be able to recognize the main features of visual representations of mathematical terminology, such as reciprocal, inverse and reflections, and to apply this understanding to graphs. The sampled student's work shows a wrong explanation: he/she may have poor knowledge of the mathematical signs and symbols, and hence could not answer when asked to choose which graph represents the function y = 1/f(x). The response can be categorized as no answer and wrong explanation.

Item 15. Question 15 tested students' Higher Order Rule skill. The students were expected to be able to visualize, reason, reconstruct and produce a concrete-abstract continuum based on the mathematical problem statements. Students should be able to illustrate the given information for the points, as well as mathematical concepts such as parallel and perpendicular lines, by drawing them. Students who draw correctly and apply their mathematical knowledge can find the width of the road, CD. The sampled student's work shows a correct drawing, so he/she would be expected to give a sound explanation for finding the width of the road; however, the student was not able to find it. The response can be categorized as no answer and correct explanation.

Item 20a. This question tested students' Defined Concept skill. Students were expected to be able to use the mathematical definitions of statistical terminology and classify the terms on the graph. This sample of student work shows that he/she could not put together the definitions of the statistical terminology, and gave wrong answers when classifying these terms on the given graph. The response can be categorized as wrong answer and no explanation.

Item 17. Students' Concrete Concept skill was tested in Question 17. The question tested students' ability to understand the given problem and apply their knowledge of geometrical shapes in critical ways; it expected students to explain by connecting 3-dimensional shapes with their 2-dimensional nets. This sample of student work shows that the student failed to state the materials needed to construct the cylinder; knowledge of the nets making up the cylinder could have been used to explain the solution for finding the shortest distance from A to B. Even so, the student answered the question itself correctly. The response can be categorized as correct answer and no explanation.

After studying a variety of students' responses, the researchers modified the rubric by adding these new cases to the existing rubric. The new rubric is shown in Table 12.

Table 12. New Scoring Rubric and Descriptions

Rubric Score | Description
0 | Omitted
1 | No answer and unreasonable explanation
2 | No answer and reasonable explanation
3 | Incorrect answer and no explanation
4 | Incorrect answer and unreasonable explanation
5 | Incorrect answer and reasonable explanation
6 | Correct answer and no explanation
7 | Correct answer and unreasonable explanation
8 | Correct answer and reasonable explanation
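For scoring at scale, the revised rubric maps naturally onto a lookup table keyed by an answer/explanation coding. This is a small illustrative sketch, assuming the page-break-obscured level 7 follows the same answer-by-explanation pattern as the rest of Table 12; the string codes are invented for the example.

```python
# (answer, explanation) -> rubric score, following Table 12.
RUBRIC = {
    ("no answer", "unreasonable"): 1,
    ("no answer", "reasonable"):   2,
    ("incorrect", "none"):         3,
    ("incorrect", "unreasonable"): 4,
    ("incorrect", "reasonable"):   5,
    ("correct",   "none"):         6,
    ("correct",   "unreasonable"): 7,
    ("correct",   "reasonable"):   8,
}

def score_response(answer: str, explanation: str) -> int:
    """Return the rubric score for one coded response; omitted items score 0."""
    return RUBRIC.get((answer, explanation), 0)

# e.g. the Question 17 sample (correct answer, no explanation) scores 6:
print(score_response("correct", "none"))  # 6
```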

By using the new rubric, students' responses were re-coded and analyzed to find the construct validity and reliability of the instrument. Statistical Package for the Social Sciences (SPSS) software version 23 was used in analysing the data. Table 13 summarizes the data analysis methods employed in this study.

Table 13. Summary of Data Analysis Methods

Purpose | Statistical Measure
Construct Validity | Factor Analysis: Exploratory Factor Analysis (EFA)
Reliability | Cronbach's Alpha

4. Findings and Discussion

Exploratory factor analysis (EFA) can be utilized in any study of a newly developed instrument where there is little prior knowledge about the underlying patterns in the data (Cater & Machtmes, 2008). It works by identifying, sorting and reducing a large number of items into a set of factors (Hair, Black, Babin, & Anderson, 2010a; Chua, 2014; Maskey, Fei, & Nguyen, 2018), and it enables the dismissal of unwanted and unrelated items based on low factor loadings. Before conducting EFA, it is important to examine the accuracy of data entry, missing values, normality and outliers. With the accuracy of data entry and missing values confirmed through data screening in SPSS, the normality test can proceed. Normality can be checked by examining the skewness and kurtosis statistics: items are considered normally distributed if the values of skewness and kurtosis are within the range of ±2 (Garson, 2012). Table 14 shows the results of the normality test.
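Outside SPSS, the same ±2 screen is easy to reproduce. The following is a minimal sketch, assuming the item scores sit in a pandas DataFrame with one column per item; the function name is illustrative.

```python
import pandas as pd

def normality_flags(scores: pd.DataFrame, bound: float = 2.0) -> pd.DataFrame:
    """Flag items whose skewness or (excess) kurtosis falls outside +/-bound."""
    stats = pd.DataFrame({
        "skewness": scores.skew(),   # adjusted Fisher-Pearson, as in SPSS
        "kurtosis": scores.kurt(),   # excess kurtosis, as in SPSS
    })
    stats["non_normal"] = stats.abs().max(axis=1) > bound
    return stats

# With this study's data, item 6a (kurtosis 3.545) would be the only flag:
# normality_flags(scores).query("non_normal")
```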

Table 14. Normality Test Descriptive Statistics (N = 428 for all items)

Item | Skewness | Std. Error | Kurtosis | Std. Error
1a | -.651 | .118 | -.244 | .235
1b | -.629 | .118 | -.679 | .235
2a | -.568 | .118 | -.848 | .235
2b | .852 | .118 | .256 | .235
3a | .689 | .118 | -.504 | .235
3b | -.404 | .118 | -.988 | .235
4a | -.652 | .118 | -.581 | .235
4b | -.638 | .118 | -.490 | .235
5a | -.356 | .118 | -.395 | .235
5b | -.344 | .118 | -.536 | .235
6a | .649 | .118 | 3.545 | .235
7 | -1.134 | .118 | .135 | .235
8 | -.678 | .118 | -.475 | .235
9 | .038 | .118 | -.591 | .235
10a | .003 | .118 | -.556 | .235
10b | .079 | .118 | -.678 | .235
11 | .029 | .118 | -.741 | .235
12a | -.428 | .118 | -.880 | .235
12b | .109 | .118 | -.445 | .235
13 | -.652 | .118 | -.697 | .235
14 | -.350 | .118 | -.995 | .235
15 | -.466 | .118 | -.946 | .235
16a | -.274 | .118 | -.620 | .235
16b | .974 | .118 | .348 | .235
17 | -.697 | .118 | -.445 | .235
18 | -.776 | .118 | .196 | .235
19a | -.471 | .118 | -.846 | .235
19b | -.275 | .118 | -.565 | .235
20a | -.123 | .118 | -.418 | .235
20b | -.570 | .118 | -.801 | .235
20c | .762 | .118 | .142 | .235
21 | .815 | .118 | -.196 | .235
22 | -.777 | .118 | -.225 | .235
23 | -.114 | .118 | -1.110 | .235
24a | -.730 | .118 | -.368 | .235
25a | -.584 | .118 | -.456 | .235
25b | .098 | .118 | 1.032 | .235
26a | -.836 | .118 | .270 | .235
26b | -.501 | .118 | -.636 | .235
26c | -.562 | .118 | -.536 | .235
27 | -.493 | .118 | -.792 | .235
28 | -.514 | .118 | -.889 | .235
29 | .474 | .118 | -.861 | .235
Valid N (listwise): 428

Based on Table 14, item 6a had a kurtosis value that violated the normality condition. Any non-normal item should be removed, but removal may affect the number of items representing the respective construct. One way to avoid removing an item is to identify outliers in the data set, that is, data whose values appear either too high or too low compared to the other data (Bluman, 2009). This can be done by examining the z-scores produced in SPSS, where detected outliers have z-scores outside the range of ±4 (Hair, Black, Babin, & Anderson, 2010b). Examination of the z-scores did not find any outliers in the data set, so item 6a had to be removed. With the normality of the data confirmed, EFA proceeded with the remaining 42 items. A few aspects determine the suitability of data for EFA: the sample size, the factorability of the correlation matrix, the Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO) and Bartlett's Test of Sphericity (Coakes et al., 2009). In this study, EFA was conducted using the Principal Component Analysis (PCA) extraction and varimax rotation functions in SPSS, with the number of factors fixed at 7 based on the number of visual literacy skills outlined. Factorability of the correlation matrix is assumed when the KMO value exceeds 0.6 and Bartlett's Test is significant at α < .05 (Hair et al., 2010a; Tabachnick & Fidell, 2007). Table 15 shows the results of the KMO and Bartlett's tests.
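Before turning to those results, the z-score screen mentioned above is sketched below; this is not the authors' code, and the function name is illustrative.

```python
import numpy as np
from scipy import stats

def outlier_cases(scores: np.ndarray, cutoff: float = 4.0) -> np.ndarray:
    """scores: cases x items matrix; return indices of cases with any |z| > cutoff."""
    z = stats.zscore(scores, axis=0)        # standardize each item (column)
    return np.unique(np.where(np.abs(z) > cutoff)[0])

# In this study no case exceeded the cutoff, so no outliers were removed
# and item 6a's non-normality could not be attributed to outlying cases.
```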

Table 15. KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy: 0.721
Bartlett's Test of Sphericity: Approx. Chi-Square = 7765.778, df = 780, Sig. = 0.000

The KMO and Bartlett's Test results confirm that factorability of the correlation matrix can be assumed. In addition, the communalities, anti-image values and factor loadings need to be examined. According to Tabachnick and Fidell (2007), communalities for all items must exceed 0.3; otherwise the items should be removed. Low communalities were identified for items 23 and 25b, with values of 0.108 and 0.061 respectively. Handling items with low communalities is somewhat tedious, since it must be done one at a time, starting with the lowest value. After removing item 25b, the communality of item 23 improved from 0.108 to 0.112 but remained below 0.30, so its removal could not be avoided. Next, the anti-image values of all items were inspected; only items with values greater than 0.5 should be retained (Hair et al., 2010b). In this analysis, all anti-image values ranged from 0.503 to 0.835, exceeding the required minimum. Another important step in EFA is to ensure that the total variance explained is sufficient even though the number of factors was fixed at 7. Hair et al. (2006) suggest that the total variance explained should be 60% or more, while Streiner (1994) suggests 50% is enough. Table 16 summarizes the total variance explained by the 7 extracted factors.
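As an aside, this extraction pipeline can be reproduced outside SPSS. Below is a sketch assuming the open-source factor_analyzer package; results will not match SPSS to the last digit, and the function name is illustrative.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

def run_efa(scores: pd.DataFrame, n_factors: int = 7):
    # Suitability checks: Bartlett significant and KMO > 0.6 (as above).
    chi_sq, p_value = calculate_bartlett_sphericity(scores)
    _, kmo_total = calculate_kmo(scores)
    assert p_value < 0.05 and kmo_total > 0.6, "data unsuitable for EFA"

    # Principal-component extraction with varimax rotation, 7 fixed factors.
    fa = FactorAnalyzer(n_factors=n_factors, method="principal",
                        rotation="varimax")
    fa.fit(scores)

    communalities = pd.Series(fa.get_communalities(), index=scores.columns)
    low = communalities[communalities < 0.30]      # candidates for removal
    loadings = pd.DataFrame(fa.loadings_, index=scores.columns)
    variance = fa.get_factor_variance()            # SS loadings, %, cumulative %
    return kmo_total, low, loadings, variance
```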


Table 16. Summary of Total Variance Explained

Component | Initial Eigenvalue (Total) | % of Variance | Cumulative % | Rotation SS Loadings (Total) | % of Variance | Cumulative %
1 | 5.093 | 12.733 | 12.733 | 3.662 | 9.156 | 9.156
2 | 3.538 | 8.845 | 21.578 | 3.589 | 8.972 | 18.128
3 | 3.223 | 8.058 | 29.635 | 3.155 | 7.888 | 26.016
4 | 2.825 | 7.062 | 36.697 | 3.047 | 7.616 | 33.632
5 | 2.562 | 6.404 | 43.101 | 3.023 | 7.557 | 41.189
6 | 2.323 | 5.808 | 48.909 | 2.863 | 7.159 | 48.347
7 | 1.911 | 4.777 | 53.686 | 2.136 | 5.339 | 53.686

(The extraction sums of squared loadings are identical to the initial eigenvalues, as expected for principal component extraction.)

All 7 extracted factors together explained 53.685% of the total variance. Next, the factor loadings of each item in the Rotated Component Matrix were examined. Factor loadings greater than ±0.30 were used, representing the minimal acceptable level of factor loading (Hair, Anderson, Tatham, & Black, 1998). The Rotated Component Matrix categorized the remaining 40 items into 7 components, as shown in Table 17.

Table 17. Components in the Rotated Component Matrix

Component 1: 4a (.834), 27 (.804), 28 (.793), 25a (.777), 8 (.700), 15 (.609)
Component 2: 2b (.783), 3a (.771), 20c (.756), 29 (.755), 16b (.637), 2a (.571), 21 (.520)
Component 3: 20b (.837), 12a (.779), 3b (.763), 1b (.758), 19a (.734)
Component 4: 26a (.767), 18 (.724), 7 (.668), 26c (.616), 14 (.603), 26b (.595)
Component 5: 20a (.834), 5a (.815), 5b (.751), 16a (.673), 19b (.637)
Component 6: 13 (.798), 24a (.788), 22 (.755), 17 (.672), 4b (.483), 1a (.439, with a cross-loading of .325 on another component)
Component 7: 12b (.709), 10a (.672), 10b (.651), 9 (.585), 11 (.540)

Table 18 summarizes the categorization of the items, which was then finalized by comparison with the skill definitions.

Table 18. Categorization of Items Based on EFA and Definitions

Higher Order Rule: 4a (.834), 27 (.804), 28 (.793), 25a (.777), 8 (.700), 15 (.609)
Visual Information: 2b (.783), 3a (.771), 20c (.756), 29 (.755), 16b (.637), 2a (.571), 21 (.520)
Discrimination: 20b (.837), 12a (.779), 3b (.763), 1b (.758), 19a (.734)
Cognitive Strategy: 26a (.767), 18 (.724), 7 (.668), 26c (.616), 14 (.603), 26b (.595)
Defined Concept: 20a (.834), 5a (.815), 5b (.751), 16a (.673), 19b (.637)
Concrete Concept: 13 (.798), 24a (.788), 22 (.755), 17 (.672), 4b (.483), 1a (.439)
Rule: 12b (.709), 10a (.672), 10b (.651), 9 (.585), 11 (.540)

In Table 17, item 1a showed cross-loadings, meaning it loaded on more than one component, so the researchers had to reconsider its categorization against the skill definitions; since the item was developed for the Concrete Concept skill, item 1a was maintained in its original construct (as shown in Table 18). Meanwhile, item 24a, originally developed for the Discrimination skill, fell into the Concrete Concept construct; such migrations are common in EFA categorization, and item 24a was therefore recategorized under Concrete Concept instead of Discrimination. Finally, the EFA results allowed the reliability to be assessed using Cronbach's alpha, as reported for each construct in Table 19, with a computational sketch below.
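Cronbach's alpha has a closed form; the following is a minimal numpy sketch (not the authors' code) of the computation behind Table 19.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: cases x items matrix of rubric scores for one construct.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance per item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Applied per construct, then over all 40 retained items together.
```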

Table 19. Reliability Analysis

Visual Literacy Skills Cronbach’s alpha

Visual Information .816

Discrimination .843

Concrete Concept .752

Defined Concept .817

Rule .653

Higher Order Rule .857

All Items .802

As Table 19 shows, the Cronbach's alpha values for all individual constructs exceed the accepted threshold of 0.6 or more (George & Mallery, 2001). Table 20 summarizes the total number of variables (items) from initial development through the final stage of data analysis.

Table 20. Summary of the Number of Items at Each Stage of Analysis

Construct | Sub-construct | Initial Stage | Experts Validation | Normality Test | Factor Analysis | Reliability Analysis
Visual Information | none | 7 | 7 | 7 | 7 | 7
Intellectual Skills | Discrimination | 7 | 7 | 7 | 5 | 5
Intellectual Skills | Concrete Concept | 6 | 5 | 5 | 6 | 6
Intellectual Skills | Defined Concept | 7 | 6 | 5 | 5 | 5
Intellectual Skills | Rule | 7 | 6 | 6 | 5 | 5
Intellectual Skills | Higher Order Rule | 6 | 6 | 6 | 6 | 6
Cognitive Strategy | none | 7 | 6 | 6 | 6 | 6

5. Conclusion

In developing an instrument, the format of the items should be made clear so that the objective of the study can be achieved. It is also important to choose the most suitable scoring rubric to avoid inconsistencies throughout the scoring process. The validity and reliability can then be identified based on the scores given. Construct validity, examined through Exploratory Factor Analysis, is another important step for finding the underlying constructs in the data set, and the categorization of items proposed by EFA supports the subsequent reliability analysis. With the constructs validated through EFA, a further study can examine convergent validity and discriminant validity through Confirmatory Factor Analysis (CFA).

6. Acknowledgement

We would like to express our sincere thanks and appreciation to the Ministry of Education (MOE) and the Research Management and Innovation Centre, Universiti Pendidikan Sultan Idris (Malaysia), for financial support through FRGS Vote No: 2017-0071-107-02.

References

1. Alpan, G. B. (2015). The Reflections of Visual Literacy Training in Pre-Service Teachers' Perceptions and Instructional Materials Design. Journal of Education and Human Development, 4(2(1)), 143-157.
2. Ary, D., Jacobs, L. C., Razavieh, A., & Sorensen, C. (2006). Introduction to Research in Education (7th ed.). Canada: Thomson Wadsworth.
3. Avgerinou, M. D. (2007). Towards a Visual Literacy Index. Journal of Visual Literacy, 27, 29-46.
4. Avgerinou, M. D. (2009). Re-Viewing Visual Literacy in the "Bain d'Images" Era. TechTrends, 53(2), 28-34.
5. Baryla, E., Shelley, G., & Trainor, W. (2012). Transforming Rubrics Using Factor Analysis. Practical Assessment, Research & Evaluation, 17(24), 1-7.
6. Bell, J. C. (2014). Visual Literacy Skills of Students in College-Level Biology: Learning Outcomes following Digital or Hand-Drawing Activities. The Canadian Journal for the Scholarship of Teaching and Learning, 1-16.
7. Bluman, A. G. (2009). Elementary Statistics. New York: McGraw-Hill.
8. Bowen, T. (2017). Assessing visual literacy: A case study of developing a rubric for identifying and applying criteria to undergraduate student learning. Teaching in Higher Education, 22(6), 705-719.
9. Brumberger, E. (2011). Visual literacy and the digital native: An examination of the millennial learner. Journal of Visual Literacy, 30, 19-46.
10. Cater, M., & Machtmes, K. (2008). Informed Decision-Making in Exploratory Factor Analysis. Journal of Youth Development, 3(3), 170-177.
11. Chua, Y. P. (2014). Statistik Penyelidikan Lanjutan: Ujian Regresi, Analisis Faktor dan Analisis SEM. Malaysia: McGraw Hill (Malaysia) Sdn Bhd.
12. Coakes, S. J., Steed, L., & Ong, C. (2009). SPSS Analysis without Anguish: Version 16.0 for Windows. Australia: John Wiley & Sons.
13. Crider, A. (2015). Teaching Visual Literacy in the Astronomy Classroom. New Directions for Teaching and Learning, 7-18.
14. Donaghy, K., & Xerri, D. (2017). The image in ELT: An introduction. In K. Donaghy & D. Xerri (Eds.), The Image in English Language Teaching (pp. 1-11). Malta: ELT Council.
15. Downing, S. M. (2006). Selected-Response Item Formats in Test Development. In S. M. Downing & T. Haladyna (Eds.), Handbook of Test Development (pp. 287-302). Mahwah, NJ: Erlbaum.
16. Fabrigar, L., Wegener, D., MacCallum, R., & Strahan, E. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272-299.
17. Field, A. (2005). Discovering Statistics Using SPSS (2nd ed.). London: SAGE Publications.
18. Garson, G. D. (2012). Testing Statistical Assumptions. Asheboro, NC: Statistical Associates Publishing.
19. George, D., & Mallery, P. (2001). SPSS for Windows Step by Step: A Simple Guide and Reference 10.0 Update (3rd ed.). Toronto, Canada: Allyn and Bacon.
20. Gierl, M. J., Bulut, O., Guo, Q., & Zhang, X. (2017). Developing, Analyzing, and Using Distractors for Multiple-Choice Tests in Education: A Comprehensive Review. Review of Educational Research, 87(6), 1082-1116.
21. Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate Data Analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.
22. Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010a). Multivariate Data Analysis (7th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
23. Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010b). Multivariate Data Analysis: A Global Perspective (7th ed.). Upper Saddle River, NJ: Pearson Education.
24. Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate Data Analysis (6th ed.). New Jersey: Prentice-Hall.
25. Haladyna, T. M. (2004). Developing and Validating Multiple-Choice Test Items. Mahwah, NJ: Lawrence Erlbaum.
26. Haladyna, T. M., & Rodriguez, M. C. (2013). Developing and Validating Test Items. New York: Routledge.
27. Haliza, A. H., & Noraini, I. (2014). Assessing Pre-University Students' Visual Reasoning: A Graphical Approach. International Journal of Assessment and Evaluation in Education, 4, 24-39.
28. Hattwig, D., Bussert, K., Medaille, A., & Burgess, J. (2012). Visual Literacy Standards in Higher Education: New Opportunities for Libraries and Student Learning. Libraries and the Academy, 13(1), 61-89.
29. Lakoff, G., & Johnson, M. (1999). Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books.
30. Liu, Z., & Jansen, B. J. (2015). Subjective versus Objective Questions: Perception of Question Subjectivity in Social Q&A. In Social Computing, Behavioral-Cultural Modeling, and Prediction: 8th International Conference, SBP (pp. 131-140). Washington, D.C., USA.
31. MacCallum, R. C., Widaman, K. F., Zhang, S., & Hong, S. (1999). Sample Size in Factor Analysis. Psychological Methods, 4(1), 84-99.
32. Maskey, R., Fei, J., & Nguyen, H.-O. (2018). Use of Exploratory Factor Analysis in Maritime Research. The Asian Journal of Shipping and Logistics, 34(2), 91-111.
33. Matusiak, K. K., Heinbach, C., Harper, A., & Bovee, M. (2019). Visual Literacy in Practice: Use of Images in Students' Academic Work. College & Research Libraries, 80(1), 123-139.
34. Mertler, C. A. (2000). Designing scoring rubrics for your classroom. Practical Assessment, Research, and Evaluation, 7(25), 1-8.
35. Messick, S. (1992). Validity of Test Interpretation and Use. In M. C. Alkin (Ed.), Encyclopedia of Educational Research (6th ed., pp. 1-29). New York: MacMillan.
36. Moskal, B. M., & Leydens, J. (2000). Scoring Rubric Development: Validity and Reliability. Practical Assessment, Research & Evaluation, 7(10), 1-6.
37. Nitko, A. J. (2001). Educational Assessment of Students (3rd ed.). Upper Saddle River, NJ: Merrill.
38. Nunnally, J. C. (1978). Psychometric Theory (2nd ed.). New York: McGraw-Hill.
39. Pallant, J. (2011). SPSS Survival Manual: A Step by Step Guide to Data Analysis Using SPSS (4th ed.). Crows Nest, NSW: Allen & Unwin.
40. Pellegrino, J. W., Baxter, G. P., & Glaser, R. (1999). Addressing the "Two Disciplines" Problem: Linking Theories of Cognition and Learning with Assessment and Instructional Practice. Review of Research in Education, 24(1), 307-353.
41. Petkov, D., & Petkova, O. (2006). Development of Scoring Rubrics for IS Projects as an Assessment Tool. Issues in Informing Science and Information Technology, 3, 499-510.
42. Polit, D. F., & Beck, C. T. (2006). The Content Validity Index: Are You Sure You Know What's Being Reported? Critique and Recommendations. Research in Nursing & Health, 29(5), 489-497.
43. Raubenheimer, J. (2004). An Item Selection Procedure to Maximise Scale Reliability and Validity. Journal of Industrial Psychology, 30(4), 59-64.
44. Reddy, M. Y. (2011). Design and development of rubrics to improve assessment outcomes: A pilot study in a Master's level business program in India. Quality Assurance in Education, 19(1), 84-104.
45. Rodriguez, M. C. (2016). Selected-Response Item Development. In S. Lane, M. Raymond, & T. Haladyna (Eds.), Handbook of Test Development (2nd ed., pp. 259-273). New York: Routledge.
46. Rosken, B., & Rolka, K. (2006). A Picture is Worth a 1000 Words: The Role of Visualization in Mathematics Learning. In Proceedings of the 30th Conference of the International Group for the Psychology of Mathematics Education (Vol. 4, pp. 457-464). Prague: PME.
47. Scully, D. (2017). Constructing Multiple-Choice Items to Measure Higher-Order Thinking. Practical Assessment, Research & Evaluation, 22(4), 1-13.
48. Shrotryia, V. K., & Dhanda, U. (2019). Content Validity of Assessment Instrument for Employee Engagement. SAGE Open, 9(1), 1-7.
49. Streiner, D. L. (1994). Figuring out factors: The use and misuse of factor analysis. Canadian Journal of Psychiatry, 39(3), 135-140.
50. Tabachnick, B. G., & Fidell, L. S. (2007). Using Multivariate Statistics (5th ed.). Upper Saddle River, NJ: Pearson Education.
51. Taherdoost, H. (2016). Validity and Reliability of the Research Instrument: How to Test the Validation of a Questionnaire/Survey in a Research. International Journal of Academic Research in Management, 5(3), 28-36.
52. Tan, G. G., Abang, M. H., & Farah, L. A. (2017). Relationship between Students' Diagnostic Assessment and Achievement in a Pre-University Mathematics Course. Journal of Education and Learning, 6(4), 364-371.
53. Van Meter, P., & Garner, J. (2005). The Promise and Practice of Learner-Generated Drawing: Literature Review and Synthesis. Educational Psychology Review, 17(4), 285-325.
54. Wainer, H., & Braun, H. I. (1988). Test Validity. Hillsdale, NJ: Lawrence Erlbaum Associates.
55. Waltz, C. F., Strickland, O. L., & Lenz, E. R. (2005). Measurement in Nursing and Health Research (3rd ed.). New York: Springer Publishing Company.
56. Wileman, R. E. (1993). Visual Communicating. Englewood Cliffs, NJ: Educational Technology Publications.
57. Yeh, H. T., & Lohr, L. (2010). Towards Evidence of Visual Literacy: Assessing Pre-service Teachers' Perceptions of Instructional Visuals. Journal of Visual Literacy, 29(2), 183-197.
58. Yenawine, P. (1997). Thoughts on Visual Literacy. In J. Flood, S. B. Heath, & D. Lapp (Eds.), Handbook of Research on Teaching Literacy through the Communicative and Visual Arts (pp. 845-846). New York: Macmillan Library Reference.
59. Zulkifli, M. N., Norain, F. A., Norngainy, M. T., & Firdaus, M. H. (2015). Student achievement at pre-university level: Does it influence the achievement at the university? Journal of Engineering Science and Technology, 10(2), 68-76.
