
Research Article

Assessing the Quality of MCQs in the Final Examination of UUM Foundation Students

Nurul Syazana binti Hishamuddin*1, Rukhaiyah binti Haji Abd Wahab2, Nur Azulia binti Kamarudin3, Amirul Husni bin Affifudin4, Amirul Haqeem bin Abd Ghani5

1,4 School of Tourism, Hospitality and Event Management, UUM COLGIS
2 School of Languages, Civilisation & Philosophy, UUM CAS
3 School of Quantitative Sciences, UUM CAS
5 School of Islamic Business, UUM COB

*Corresponding Author: syazana@uum.edu.my

Article History: Received: 10 November 2020; Revised: 12 January 2021; Accepted: 27 January 2021; Published online: 05 April 2021

Abstract: The multiple-choice question (MCQ) format is commonly used to assess student knowledge because it can accommodate a large number of participants. MCQs are easy to administer and their results can be obtained quickly. To ensure that students are tested effectively, it is important to analyse the quality of MCQs by assessing the questions, or items, with educational measurements. The aim of this study is to assess the quality of the MCQs in a final examination taken by foundation students, using the Difficulty Index and the Discriminant Index. The sample is the final exam paper of Introduction to Philosophy, one of the courses taken by foundation students. The analysis covers 33 graded final exams and 20 MCQ items. The findings show 3 'good' items that can be retained, 9 'fair acceptable/not acceptable' items that can be retained or revised, and 8 items that should be revised or discarded. These results demonstrate that assessing item quality matters, as it can change the selection of items in an assessment.

Keywords: Item difficulty, item discriminant, assessment, final exam

1. Introduction

The aim of this study is mainly to understand whether the final exam questions developed for foundation programme students meet the requirements of the course. This is important to ensure that students are assessed at their actual level of knowledge and understanding. Common misunderstandings about the definitions of evaluation and assessment have sometimes led teachers to construct questions that are valid for the subject matter but not appropriate for the intended students.

These misunderstandings involve the difficulty level of the questions and the discrimination ability of the question sets. Understanding the importance of constructing valid and reliable questions is a must for teachers and lecturers alike. Assessing a student's ability with well-checked tools does not merely benefit the student; most importantly, it is fair to them.

Therefore, this study examines one subject's set of questions using the Difficulty Index, Discriminant Index and Distractor Efficiency to find evidence of whether the question set is reliable. The findings will then determine whether some of the questions need to be excluded, or adjusted to meet the conditions of a reliable question.

2. Literature Review

There is a substantial literature on item difficulty and the discrimination index in education. However, assessments of specific foundation-level programmes of study remain scarce in the mainstream literature. Moreover, the focus on Malaysian foundation students in public universities makes this study unique and worth exploring.

Item difficulty concerns how hard a test question is. Because tests or examinations comprise more than one item, item difficulty can be measured through the item difficulty index, which measures the level of difficulty of any given item. There are many ways to measure perceived item difficulty, but the optimal method is to dichotomize between the highest and the lowest results (Bratfisch et al., 1972, p. 9). Experiments have also tested whether educators who developed items could estimate item performance for their students (Impara & Plake, 1998). The results showed that although educators could estimate item performance, their accuracy was not high.


The item difficulty concept is widely used in education and in other fields such as psychology to test established models. Dawber, Rogers and Carbonaro (2009) paired item difficulty and discrimination with classical and item response theory models. Pardos & Heffernan (2011) introduced item difficulty to the knowledge tracing model. Reckase & McKinley (1983) redefined both item difficulty and discrimination in accordance with multidimensional item response theory models. Even in psychology, item difficulty has influenced models such as the binomial test models (van der Linden, 1979). A pattern emerges here: item difficulty is usually paired with a measurement of item discrimination.

Item discrimination analysis is conducted to understand how a test question, or item, can differentiate between students who have strong knowledge of the tested subject and those who are weak. These two groups can also be described as high-performance and low-performance students. The item discrimination index enables us to identify whether an item discriminates correctly between them: high performers are expected to answer most questions correctly compared to low performers.

Ebel (1967, p. 126) mentioned that the index of this high-low or 'upper-lower' difference is also designated by the symbol D. The item discrimination index (D) is defined as:

'the difference in proportions of correct response between two extreme groups in the distribution of total test scores; 27% of examinees who receive the highest scores and the 27% who receive the lowest scores.' (Ebel, 1967, p. 126)

This D index is considered the simplest among the item discrimination indices, which include the tetrachoric coefficients of correlation, phi or fourfold point correlation coefficients, biserial coefficients, point biserial coefficients, and the Davis discrimination indices (Engelhart, 1965). A high discrimination result on any of these indices is much sought after in traditional test theory because of its correlation with item quality: 'The higher the item's discrimination, the better the item is supposed to be' (Masters, 1988). Therefore, item discrimination and item difficulty are both utilized to investigate the quality of an item. This endeavour to establish empirical data on item quality has driven researchers to conduct many case studies over the years.

Other than the field of education, studies in medicine and health have produced many case studies investigating item quality through item difficulty and item discrimination. Sim & Rasiah (2006) explored the relationship between the difficulty and discrimination levels of the true/false-type multiple-choice questions (MCQs) of a multidisciplinary test paper for the para-clinical year of an undergraduate medical programme in the Department of Pharmacology, University of Malaya. A more recent case study was done in 2018 by the Department of Physiology, Khartoum University, where both item difficulty and item discrimination were used to examine ten physiology MCQ test papers (Musa et al., 2018). Their investigations showed that, in order to maximise exam discrimination, the number of outliers (very difficult and very easy items) has to be reduced.

Currently, there is no literature on assessing examinations at the undergraduate level in any of the public universities in Malaysia. Most of the available literature investigates the quality of research supervision (Amzat et al., 2010), difficulties faced by foreign students (Ali Alghail & Ali Mahfoodh, 2016) and university-industry collaborations (Azman et al., 2019). These topics dominate current research on Malaysian public universities. Nevertheless, it is hoped that this article will renew interest in the study of assessments.

3. Methodology

This study uses a descriptive quantitative approach conducted on students at the Centre for Foundation Management. The course involved is offered by the School of Languages, Civilisation and Philosophy (SLCP), UUM. The study involved 33 foundation students out of the total population of 204 in cohort 6; all 33 students are from group E of the cohort. The data collected were student responses to 20 items randomly selected from the 30 multiple-choice questions, each with four options. As the students were beginners in philosophy, the questions tested basic and introductory material: first, the basics of general philosophy; second, basic concepts and theoretical frameworks of philosophy as a discipline; and finally, various models of Islamic, Western and Eastern philosophical thought. The data were analysed using three types of analysis: the Difficulty Index (DIF I), the Discriminant Index (DI) and Distractor Efficiency (DE).


Before the analysis, the 33 students' grades were ranked from lowest to highest. Each group comprised 27% of the students, which meant 9 students per group: the 9 students in the Upper Group scored the 9 highest marks, while the 9 students in the Lower Group scored the 9 lowest marks.

Item difficulty

Item difficulty concerns how hard a test question is. Because tests or examinations comprise more than one item, item difficulty can be measured through the item difficulty index, which measures the level of difficulty of any given item. According to Ebel (1965), there are five categories for the index of difficulty: a DIF of 0.91 and above is considered "Very Easy"; 0.76 to 0.90 is "Easy"; 0.26 to 0.75 is "Optimum Difficulty"; 0.11 to 0.25 is "Difficult"; and 0.10 and below is "Very Difficult". The formula to calculate the difficulty index (DIF I) is:

DIF I = ((U + L) / N) × 100

where:
U = number of correct responses in the upper group
L = number of correct responses in the lower group
N = total number of examinees in both groups
DIF I = Difficulty index
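For illustration, here is a minimal Python sketch of this calculation, assuming 0/1 item scores and the 27% upper/lower grouping described earlier; the function names and counts are illustrative, not from the paper.

```python
# A minimal sketch of the DIF I calculation above, assuming 0/1 item
# scores and the 27% upper/lower grouping described earlier. All
# function names and counts are hypothetical.

def split_groups(total_scores):
    """Return the indices of the upper and lower 27% of examinees."""
    n = len(total_scores)
    k = round(0.27 * n)  # 9 students per group when n = 33
    order = sorted(range(n), key=lambda i: total_scores[i])
    return order[-k:], order[:k]  # upper group, lower group

def difficulty_index(u_correct, l_correct, n_both):
    """DIF I = ((U + L) / N) x 100, where U and L count correct
    responses in the upper and lower groups and N is the total
    number of examinees in both groups."""
    return (u_correct + l_correct) / n_both * 100

# Hypothetical item: 7 of 9 upper-group and 4 of 9 lower-group
# students answered correctly.
print(difficulty_index(7, 4, 18))  # ~61.1 -> "Optimum Difficulty"
```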

Item discrimination

Item discrimination analysis is conducted to understand how a test question, or item, can differentiate between students with strong knowledge of the tested subject and those who are weak. These two groups can also be described as high-performance and low-performance students. The item discrimination index identifies whether an item discriminates correctly between them; the high performers are expected to answer most questions correctly compared to the low performers.

The formula for the discrimination index (DI) is as follows:

DI = ((U − L) / N) × 2

where:
U = number of correct responses in the upper group
L = number of correct responses in the lower group
N = total number of examinees in both groups
DI = Discriminant index

A DI value over 0.40 is considered "high", 0.20 to 0.39 is "moderate", 0.15 to 0.19 is "marginal" and less than 0.15 is "poor".
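A companion sketch for the Discriminant Index follows, reusing the same hypothetical counts as in the earlier example.

```python
# A companion sketch for the Discriminant Index, reusing the same
# hypothetical counts (7 of 9 upper-group, 4 of 9 lower-group correct).

def discrimination_index(u_correct, l_correct, n_both):
    """DI = ((U - L) / N) x 2."""
    return (u_correct - l_correct) / n_both * 2

def di_band(di):
    """Interpretation bands as stated above; note that Table 1 later
    collapses 'marginal' and 'poor' into one 'Low' band (0.19 and below)."""
    if di >= 0.40:
        return "high"
    if di >= 0.20:
        return "moderate"
    if di >= 0.15:
        return "marginal"
    return "poor"

di = discrimination_index(7, 4, 18)
print(round(di, 2), di_band(di))  # 0.33 moderate
```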

Next, the DIF and DI results were evaluated together to categorize each item as one to retain, revise or reject. An item is categorized as "good and acceptable" when its DI is high and its DIF falls in the optimum-difficulty range; such an item should be retained. An item is "fair and acceptable" when its DI is high or moderate and its DIF is difficult, optimum or easy; retaining it is recommended. An item categorized as "fair and not acceptable" likewise has a high or moderate DI and a difficult, optimum or easy DIF, but should be revised. Lastly, an item that is "poor acceptable or not acceptable" has a low DI, whatever its DIF, and should be revised or discarded. One plausible reading of these rules is sketched below.
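```python
# A sketch of one plausible reading of the retain/revise/discard rules
# above; the paper leaves the boundary between "fair acceptable" and
# "fair not acceptable" implicit, so both are folded into one outcome.
# dif is the Difficulty Index in percent, di the Discriminant Index.

def item_decision(dif, di):
    optimum = 26 <= dif <= 75   # Ebel's optimum-difficulty band
    workable = 11 <= dif <= 90  # difficult, optimum or easy
    if di >= 0.40 and optimum:
        return "good - retain"
    if di >= 0.20 and workable:
        return "fair - retain or revise"
    return "poor - revise or discard"

# Hypothetical values echoing the earlier examples.
print(item_decision(61.1, 0.45))  # good - retain
print(item_decision(61.1, 0.33))  # fair - retain or revise
print(item_decision(95.0, 0.10))  # poor - revise or discard
```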

Distractor analysis

Distractor analysis is measured by distractor efficiency (DE). It evaluates how many students in the lower and upper groups select each option of a multiple-choice item. A distractor is expected to show negative discrimination and to be selected by at least 5 percent of the students.
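A minimal sketch of this 5% rule is given below, assuming option counts have been tallied per item across all examinees; the counts and answer key are hypothetical.

```python
# A minimal sketch of the 5% distractor rule above, assuming option
# counts are tallied per item across all examinees. The counts and
# the answer key below are hypothetical.

def functional_distractors(option_counts, key, n_students):
    """Return the non-key options chosen by at least 5% of students."""
    threshold = 0.05 * n_students
    return [opt for opt, count in option_counts.items()
            if opt != key and count >= threshold]

# Hypothetical item with key "B": options "A" and "D" function as
# distractors, while "C" (chosen by 1 of 33, about 3%) does not.
counts = {"A": 5, "B": 20, "C": 1, "D": 7}
print(functional_distractors(counts, "B", 33))  # ['A', 'D']
```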


4. Findings

The data were processed in Microsoft Excel using the formulas above, and the results are tabulated in the table below:

Table 1. Results of the Index of Difficulty and Discriminant Index of the MCQs

Difficulty Index (DIF I)   Category             Number of items (/20)
10% and below              Very Difficult       -
11% to 25%                 Difficult            1
26% to 75%                 Optimum Difficulty   15
76% to 90%                 Easy                 3
91% and above              Very Easy            1

Discrimination Index (DI)  Category             Number of items (/20)
0.40 and above             High                 3
0.20 to 0.39               Moderate             9
0.19 and below             Low                  8

The items can be categorised by quality. A 'good' item has a high discrimination level and an optimum difficulty level; good items may be retained for the assessment. An item categorised as 'fair' and 'acceptable' has a high or moderate discrimination level and a difficulty level that is optimum, easy or difficult; it may be retained for the assessment. An item that is 'fair' and 'not acceptable' likewise has a high or moderate discrimination level and an optimum, easy or difficult level, but should be revised. Lastly, items considered 'poor', whether 'acceptable' or 'not acceptable', have a low discrimination level at any difficulty level ('optimum', 'easy' or 'difficult') and must be revised or discarded. The table below shows the item categories based on the Difficulty Index and Discriminant Index results.

Table 2. Categorisation of the MCQ items

Item category                    Number of items   Items
Good items                       3                 Q1, Q7, Q17
Fair acceptable/not acceptable   9                 Q2, Q3, Q5, Q6, Q11, Q13, Q14, Q15, Q18
Poor items                       8                 Q4, Q8, Q9, Q10, Q12, Q16, Q19, Q20

A standard for interpreting the reliability of tools is given by George & Mallery (2003). The Cronbach's Alpha reliability coefficient is categorized into several classes: 'excellent' for values greater than 0.90, 'good' for values from 0.80 to 0.89, 'acceptable' for values from 0.70 to 0.79, 'questionable' for values from 0.60 to 0.69, 'poor' for values from 0.50 to 0.59 and, lastly, 'unacceptable' for values less than 0.50. The Cronbach's Alpha reliability coefficients for all items are given in Table 3.

Table 3. Cronbach's Alpha reliability coefficient

Item   Cronbach's Alpha
1      0.6363
2      0.6066
3      0.6052
5      0.6694
6      0.6484
7      0.6420
8      0.6168
9      0.6159
10     0.6507
11     0.6490
12     0.6502
13     0.6296
14     0.6509
15     0.6412
16     0.6484
17     0.6583
18     0.6407
19     0.6494
20     0.6636
All    0.6527
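For reference, a minimal sketch of how such coefficients can be computed for dichotomous MCQ data is given below; the per-item values in Table 3 are 'alpha if item deleted' figures, and the toy matrix is hypothetical.

```python
# A minimal sketch of Cronbach's Alpha for dichotomous MCQ data,
# assuming a (students x items) matrix of 0/1 scores. The per-item
# values in Table 3 correspond to "alpha if item deleted", i.e. alpha
# recomputed with that item's column removed. Toy data, not the paper's.
import numpy as np

def cronbach_alpha(x):
    k = x.shape[1]                          # number of items
    item_vars = x.var(axis=0, ddof=1)       # per-item variances
    total_var = x.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(x):
    return [cronbach_alpha(np.delete(x, j, axis=1)) for j in range(x.shape[1])]

# Toy data: 6 students x 4 items (1 = correct answer).
x = np.array([[1, 1, 0, 1],
              [0, 1, 0, 0],
              [1, 1, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 0]], dtype=float)
print(round(cronbach_alpha(x), 4))
print([round(a, 4) for a in alpha_if_deleted(x)])
```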

5. Discussion and conclusion

The aim of the MCQs is to test the students' knowledge, comprehension, application and analysis. The JSU (see Appendix I) and the table of specification are given below:

Table 4. Table of Specification for the Multiple-Choice Questions

Level of objective   Number of items (share of grade)
Knowledge            4 (20%)
Comprehension        4 (20%)
Application          5 (25%)
Analysis             7 (35%)
Synthesis            -
Evaluation           -
Total                20 (100%)

The items are distributed across Chapters 1 to 6 with row totals of 3, 5, 4, 7 and 1 items; the full cell-level mapping is given in the JSU (Appendix I).

The MCQ is a traditional assessment. However, it is still a relevant tool that is used in undergraduate settings in many fields (Fowell et al., 2000; McCoubrie, 2004). The advantages of MCQs are that they are easy to administer, score and analyze. It is important to analyse the quality of the items to ensure that the score is valid for the level of student understanding. To analyse item quality, the study uses the Index of Difficulty (p) and the Discriminant Index (D) by Ebel (Kolte, 2015). The discriminant index helps verify that the Index of Difficulty is precise (Aiken, 1979). Each MCQ item has been evaluated based on DIF I, DI and DE, because a flawed item is itself a distraction and can invalidate the assessment.

Ebel (1965) listed that a DIF I of 0.91 and above is considered very easy, a DIF I of 0.76 to 0.90 is easy, a DIF I of 0.26 to 0.75 has an optimum difficulty, a DIF I of 0.11 to 0.25 is difficult, and a DIF I of 0.10 and below is very difficult. The item analysis showed that most of the items (15 items, or 75%) are categorised as optimum. Therefore, it can be said that the questions are reliable and acceptable for the level of the students' knowledge. The item analysis also showed that, of the set of 20 questions, only one item each could be classified as very easy (Question 12), easy (Question 6) and difficult (Question 19). Meanwhile, the discriminant index analysis showed that 12 items (3 high and 9 moderate) fall within the acceptable range (DI ≥ 0.20). Furthermore, based on the analysis of the index of difficulty and the discriminant index of the MCQs, only 15% (3 items) could be retained.

Overall, the students' performances reflect the level of difficulty and discrimination of the item set. The mean value gives a rough guide to the difficulty of the test: a high mean indicates a test the students found easy, while a low mean indicates a more difficult test. A high standard deviation indicates that the test takers were more diverse than on a test with a low standard deviation (Sewagegn, 2019). Items that are too difficult may deflate scores, while easy items may inflate them. Furthermore, high DIF I items (easy items) should either be placed at the beginning of the test or discarded. Similarly, low DIF I items should be revised or discarded.

Meanwhile, for the Cronbach's Alpha reliability coefficient, the overall value is 0.6527, with item values ranging from 0.6052 to 0.6694, as shown in Table 3. Hair et al. (1998) note that while a value of 0.70 is generally agreed to be acceptable, values as low as 0.60 may be acceptable for exploratory testing. A commonly accepted rule is that 0.60 to 0.70 implies an adequate level of reliability, and 0.80 or higher a very strong level. This shows that the items of the assessment are suitable for the foundation students in this course.

The discussion shows that the instructor should revise the flagged MCQ items. Indeed, the difficulty index is related to the discrimination index: items with a high discrimination index are considered good items, whereas items with an optimum, easy or very easy difficulty index are unacceptable when their discrimination index is only moderate or low. Therefore, in revising the test, the instructor should reconsider the questions rated fair and poor. Instructors should use the analysis to guide the next assessment design. Moreover, consistent item analysis is required, as it relates directly to student performance.

Furthermore, the design of distractors influences student performance (Dufresne et al., 2002). The distractor analysis used the data from p and D to scrutinize items. For an item to work well, each distractor must be selected by at least 5% of the test takers (Kolte, 2015). Of the total of 80 distractors (4 per item), 29% (23) were functional distractors and 71% (57) were non-functional distractors.

In conclusion, quality is the underlying goal that drives instructors to continuously review their own work. What makes the item analysis process special is that teachers and lecturers review their work using input from their own students, even when the students are unaware of their contribution or of the process itself. Item analysis thus allows educators and their students to work together in the pursuit of quality through the investigation of item difficulty and item discrimination, and, by doing so, a viable question bank can be built.

References

1. Aiken, L. R. (1979). Relationships between the item difficulty and discrimination indexes. Educational and Psychological Measurement, 39(4), 821-824. https://doi.org/10.1177/001316447903900415
2. Ali Alghail, A. A., & Ali Mahfoodh, O. H. (2016). Academic reading difficulties encountered by international graduate students in a Malaysian university. Issues in Educational Research, 26(3), 369.
3. Amzat, I. H., Yusuf, M., & Kazeem, B. K. (2010). Quality research supervision in some Malaysian public universities: Supervisees' expectations and challenges. https://papers.ssrn.com/abstract=1668172
4. Azman, N., Sirat, M., Pang, V., Lai, Y. M., Govindasamy, A. R., & Din, W. A. (2019). Promoting university-industry collaboration in Malaysia: Stakeholders' perspectives on expectations and impediments. Journal of Higher Education Policy and Management, 41(1), 86-103. https://doi.org/10.1080/1360080X.2018.1538546
5. Bratfisch, O., Borg, G., & Dorvic, S. (1972). Perceived item-difficulty in three tests of intellectual performance capacity. https://eric.ed.gov/?id=ED080552
6. Dawber, T., Rogers, W. T., & Carbonaro, M. (2009). Robustness of Lord's formulas for item difficulty and discrimination conversions between classical and item response theory models. Alberta Journal of Educational Research, 55(4), 512-533.
7. Dufresne, R. J., Leonard, W. J., & Gerace, W. J. (2002). Making sense of students' answers to multiple-choice questions. The Physics Teacher, 40(3), 174-180. https://doi.org/10.1119/1.1466554
8. Ebel, R. L. (1965). Measuring educational achievement. Prentice-Hall.
9. Ebel, R. L. (1967). The relation of item discrimination to test reliability. Journal of Educational Measurement, 4(3), 125-128.
10. Engelhart, M. D. (1965). A comparison of several item discrimination indices. Journal of Educational Measurement, 2(1), 69-76. https://doi.org/10.1111/j.1745-3984.1965.tb00393.x
11. Fowell, S. L., Maudsley, G., Maguire, P., Leinster, S. J., & Bligh, J. (2000). Student assessment in undergraduate medical education in the United Kingdom, 1998. Medical Education, 34(Suppl. 1), 1-49. https://doi.org/10.1046/j.1365-2923.2000.0340s1001.x
12. George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference, 11.0 update. Allyn and Bacon.
13. Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (1998). Multivariate data analysis (5th ed.). Prentice Hall.
14. Impara, J. C., & Plake, B. S. (1998). Teachers' ability to estimate item difficulty: A test of the assumptions in the Angoff standard setting method. Journal of Educational Measurement, 35(1), 69-81.
15. Kolte, D. V. (2015). Item analysis of multiple choice questions in physiology examination.
16. Masters, G. N. (1988). Item discrimination: When more is worse. Journal of Educational Measurement, 25(1), 15-29.
17. McCoubrie, P. (2004). Improving the fairness of multiple-choice questions: A literature review. Medical Teacher, 26(8), 709-712. https://doi.org/10.1080/01421590400013495
18. Musa, A., Shaheen, S., Elmardi, A., & Ahmed, A. (2018). Item difficulty & item discrimination as quality indicators of physiology MCQ examinations at the Faculty of Medicine, Khartoum University.
19. Pardos, Z. A., & Heffernan, N. T. (2011). KT-IDEM: Introducing item difficulty to the knowledge tracing model. In J. A. Konstan, R. Conejo, J. L. Marzo, & N. Oliver (Eds.), User Modeling, Adaption and Personalization (pp. 243-254). Springer. https://doi.org/10.1007/978-3-642-22362-4_21
20. Reckase, M. D., & McKinley, R. L. (1983). The definition of difficulty and discrimination for multidimensional item response theory models. https://eric.ed.gov/?id=ED228288
21. Sewagegn, A. (2019). A study on the assessment methods and experiences of teachers at an Ethiopian university. International Journal of Instruction, 12, 605-622. https://doi.org/10.29333/iji.2019.12238a
22. Sim, S.-M., & Rasiah, R. I. (2006). Relationship between item difficulty and discrimination indices in true/false-type multiple choice questions of a para-clinical multidisciplinary paper. Annals of the Academy of Medicine, Singapore, 35(2), 67-71.
23. van der Linden, W. J. (1979). Binomial test models and item difficulty. Applied Psychological Measurement, 3(3), 401-411. https://doi.org/10.1177/014662167900300311


Appendix I. JSU for the Multiple-Choice Questions

TABLE OF SPECIFICATION

SCHOOL OF LANGUAGES, CIVILISATION AND PHILOSOPHY

LECTURER'S NAME: DR. RUKHAIYAH BINTI HAJI ABD WAHAB
COURSE: AD 0033 INTRODUCTION TO PHILOSOPHY
SEMESTER: I 2019/2020 (A191)
TEST DURATION: 2 1/2 HOURS

[JSU matrix: a grid mapping each of the 20 questions to its content row, question level (low: Remembering; mid: Understanding, Applying; high: Analyzing, Evaluating, Creating), grade and CLO. The cell-level layout could not be recovered from the source; the row totals are 3, 5, 4, 7 and 1 questions, giving 20 items in total.]

Prepared by: Dr. Rukhaiyah Binti Haji Abd Wahab    Date: 4-Nov-19
Checked by:                                        Date:

CHAPTER 1 INTRODUCTION

CHAPTER 2 BRANCHES AND CONCEPTS OF PHILOSOPHY

CHAPTER 3 THE RELATIONS OF PHILOSOPHY

CHAPTER 4 FUNCTIONS OF PHILOSOPHY

CHAPTER 5 APPLICATION OF PHILOSOPHY

CHAPTER 6 BASIC PARADIGMS IN ISLAMIC, WESTERN AND EASTERN PHILOSOPHICAL THOUGHTS

Topics covered: Etymological and Meaning of Philosophy; Descriptive and Prescriptive Definition; Meaning; Nature; Speculative; Phenomenology; Analytic; Rational; Critical; Logical; Development; Moral Judgement; Social Consciousness; The Branches of Pure Philosophy; The Branches of Applied Philosophy; Some Concepts of Philosophy; Religion in Philosophy; Human and Society in Philosophy; Science and Philosophy; The Uniqueness of Islamic, Western and Eastern Philosophy; Islamic Philosophical Thoughts; Western Philosophical Thoughts; Eastern Philosophical Thoughts.

Upon completion of the course, students are able to:
CLO1: Describe the realm of general knowledge of philosophy (C1, P2, A1)
CLO2: Generalize with some relative ease the elementary concepts, logics, laws and terms of philosophy and reasoning (C2, P2, A2)
CLO3: Apply the basic principles of philosophy to both academic life and real-life experiences (C3, P2, A3).
