Research Article

A Critical Review on Recent Proposed Automated Programming Assessment Tool

Muhammad Huzaifah Ismail1, Muhammad Modi Lakulu2*

1,2*Department of Computing, Sultan Idris Education University, Malaysia

modi@fskik.upsi.edu.my2*

Article History: Received: 10 November 2020; Revised: 12 January 2021; Accepted: 27 January 2021; Published online: 05 April 2021

Abstract: Automated assessment tools for programming assignments have been discussed extensively in education technology since their inception. Such tools aim to reduce the workload of instructors and to provide instant feedback to students when assessing programming assignments. Myriad versions of the tool, accompanied by multiple features, have been proposed to date to cater to modern trends in education technology. Many previous studies have analysed these existing tools, but not the most recent ones. This study conducts a critical review of recently proposed tools based on their pedagogical and technical aspects. As a result, the more recently proposed tools are identified and their challenging issues are highlighted.

Keywords: Education Technology, Automatic Assessment, Programming Assignments, Tool Analysis

1. Introduction

Programming, in the context of computing, is the activity of writing instructions that tell a computer how to process specific information [1]. It is an essential practical skill, especially for those who want to pursue a career in the Computer Science (CS) field, because the ability to program contributes to many areas of CS, particularly software development. Therefore, undergraduate students who enrol in a CS course at university will be involved with many programming assignments. Typically, for an introductory programming course, the instructor designs programming assignments that require students to develop a functional program that performs problem-solving. The student therefore has to develop a functional program that executes correctly, as specified in the assignment outline, as part of their classroom assessment task.

However, assessing these programming assignments manually raises many issues, including inaccurate assessment [2] and greater effort and time consumption [3], especially with large student enrolments. These issues occur because the nature of programming allows code with different logic and constructs to still produce the same output [4]. In other words, students can apply multiple approaches in programming that all lead to the same expected solution. It is therefore difficult for the instructor to determine the correctness of a programming assignment, and assessment can become a slow, tedious process, because the student's code has to be examined line by line and tested thoroughly against the specified requirements. To address these issues, researchers have devised a solution known as the Automatic Programming Assessment Tool (APAT), a software tool that can evaluate students' programming assignments automatically.

In order to cater to modern trends in education technology, myriad versions of APAT accompanied by innovative features have been proposed to date. Many previous studies have analysed these existing APATs, but not the most recent ones. Thus, this study conducts a critical review of recently proposed APATs from their pedagogical and technical aspects. The novelty of this study is that the more recently proposed APATs are identified and highlighted. In addition, the challenging issues in the recent trend of APAT development are discussed throughout this study. Highlighting these details makes it possible to determine the research gaps in APAT for future studies to address.

The remainder of the paper is organized as follows. The second section briefly discusses the background of APAT, while the third section highlights the related work of this study. The fourth section demonstrates the methodology of the study, from identifying the recently proposed APATs to classifying their pedagogical and technical aspects. The fifth section presents the discussion, which relates to recommended studies for future APAT research. Lastly, the sixth section concludes the study.

2. Background

The very first APAT was introduced in 1960 by Hollingsworth, whereby the correctness of the student program's functionality was evaluated automatically [3], [5]. Fundamentally, the proposed APAT worked by running a grader program against the student program and producing one of two results: "wrong answer" (for a rejected solution) or "program complete" (for a successful solution). The proposed APAT was specifically designed to evaluate the assembly programming language. Since then, automatic evaluation has always served as the main goal of APAT, and it has later been extended with other features as well, such as test data generation [6]–[8], a semi-automatic approach to assessment [9], [10], plagiarism detection [11], [12], automated feedback messages [13], and so forth.

The Three Types of Analysis

The automatic evaluation feature has been an integral part of APAT since its introduction. It comprises three types of analysis: dynamic, static, and hybrid. Each type of analysis evaluates the student's program against different grading criteria. The general grading criteria that instructors usually apply when assessing programming assignments include program execution, program specifications, program design, coding style, comments, and creativity [14]. It is therefore crucial for instructors to be aware of the capabilities and limitations of these types of analysis, so that they are able to choose the APAT suitable for their usage.

Dynamic Analysis

Dynamic analysis (DA) follows the first proposed APAT: the student's program is executed, and the evaluation is based on comparing the output produced by the student's program with the expected output [15], which is either provided manually by the instructor or generated automatically [6], [16]. Simply put, DA applies the same concept as the black-box and grey-box approaches highlighted in software testing methodology [17]. DA was designed to cater to the program execution criterion, whereby the student's program should compile, execute cleanly, and produce correct results. From the perspective of DA, a correct result means that the student's program produces the desired output; this is how the correctness of the student's program is determined.

Technically, DA works by taking the source code file as input and compiling it; a file containing a set of input test cases is then fed to the program, the output generated by the program is redirected to a file, and that file is compared with the file containing the expected output test cases [18]. If the output files pass the test case comparison, the student is considered to have successfully answered the programming assignment. Figure 1 shows a flowchart of how DA works.

Figure 1. The flowchart of dynamic analysis
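To make the flowchart concrete, the following is a minimal sketch of such a DA grader in Python, assuming C submissions compiled with gcc; the function name, file layout, and five-second limit are illustrative assumptions, not details of any reviewed tool.

```python
import subprocess
from pathlib import Path

def grade_dynamically(source: Path, cases: list[tuple[Path, Path]]) -> float:
    """Compile the student's C source, run each input test case, and
    return the fraction of cases whose output matches the expected file."""
    exe = source.resolve().with_suffix("")            # e.g. sum.c -> sum
    build = subprocess.run(["gcc", str(source), "-o", str(exe)],
                           capture_output=True)
    if build.returncode != 0:                         # compilation error:
        return 0.0                                    # DA cannot proceed
    passed = 0
    for input_file, expected_file in cases:
        try:
            with input_file.open() as stdin:
                run = subprocess.run([str(exe)], stdin=stdin,
                                     capture_output=True, text=True,
                                     timeout=5)
        except subprocess.TimeoutExpired:
            continue                                  # infinite loop: case failed
        # Normalise trailing whitespace before the output comparison
        if run.stdout.strip() == expected_file.read_text().strip():
            passed += 1
    return passed / len(cases)
```

A real grader would additionally run the child process under the security measures discussed next.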

The main advantage of applying DA is that it is easy to implement in an APAT, and the testing can be non-technical [17] for the instructor to apply: the APAT only has to verify that the student program's output matches the test cases, rather than evaluating the whole program structure, which requires a more sophisticated mechanism. Nonetheless, the major drawback of implementing DA in an APAT is the high risk of damaging the APAT itself and its system environment, since DA requires executing the student's program in order to carry out its evaluation process. Executing the student's program could lead the APAT to run a malicious program if no proper security measures are implemented [19]. Consequently, it is important for an APAT to embed proper security mechanisms, such as sandboxing [20], explicitly in the implementation of DA.
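As an illustration only, and not a hardened sandbox, the sketch below shows one common POSIX-level measure: capping the CPU time and memory of the child process before the student binary (a hypothetical ./student_program) is executed.

```python
import resource
import subprocess

def limit_resources():
    # Runs in the child process just before exec: cap CPU seconds and
    # address space so a malicious or runaway submission is killed early.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                     # 2 s CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MB

result = subprocess.run(["./student_program"],   # hypothetical student binary
                        capture_output=True, text=True,
                        preexec_fn=limit_resources,
                        timeout=10)              # wall-clock safety net
```

Production APATs typically layer further defences on top of resource limits, such as containers or system-call filtering.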

Static Analysis

Static analysis (SA), on the other hand, evaluates the student's program without executing it [21]; the results are then used to provide feedback on the correctness of the student's program structure [20]. In comparison to DA, SA is safer to apply because it does not involve executing the student's program, which automatically prevents the APAT from harming itself with a malicious program. In addition, student programs that contain compilation errors can also be evaluated with SA, since no program execution is involved [19]. SA covers more of the grading criteria than DA, including program specifications, program design, coding style, comments, and creativity, as described before.

SA can apply many techniques, as long as they are able to evaluate the program structure. One of the techniques SA primarily applies is called structure similarity analysis. An SA that carries out structure similarity analysis operates by calculating the similarity of the program structure between the student's program and a solution model program [22]. A solution model program is essentially a file containing source code written and provided by the instructor. Once SA is performed, the correctness of the student's program structure is determined by how similar it is to the solution model program. The intermediate representation can be a graph, a tree, or pseudocode, depending on which technique the structure similarity analysis uses for the evaluation process. The techniques most commonly adopted for structure similarity analysis are graph matching [17], [23], abstract syntax trees [24], and pseudocode comparison [25]. Figure 2 shows a flowchart of the basic SA mechanism implemented in an APAT.

Figure 2. The flowchart of static analysis
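As a minimal sketch of structure similarity analysis, assuming Python submissions purely for illustration (no reviewed tool is claimed to work this way), both programs can be reduced to a sequence of abstract syntax tree node types, an intermediate representation, and compared with a similarity ratio.

```python
import ast
import difflib

def node_types(source: str) -> list[str]:
    # Reduce a program to the sequence of AST node type names
    return [type(n).__name__ for n in ast.walk(ast.parse(source))]

def structural_similarity(student_src: str, model_src: str) -> float:
    # Ratio in [0, 1]: how closely the structures align
    return difflib.SequenceMatcher(None,
                                   node_types(student_src),
                                   node_types(model_src)).ratio()

model   = "total = 0\nfor x in range(10):\n    total += x\nprint(total)"
student = "s = 0\nfor i in range(10):\n    s = s + i\nprint(s)"
print(structural_similarity(student, model))   # close to 1.0: similar structure
```

Because nothing is executed, the same comparison still works on programs that would fail at runtime, mirroring SA's safety advantage noted above.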

Although SA has advantages over DA, being safer to apply, covering more grading criteria, and able to evaluate a student's program that has compilation errors, its capability is still limited. Its major limitation is in analysing the various possible approaches in students' program structures [17], [22], [23]. As previously discussed (see Section 1), the nature of programming is that program structures can consist of code with different logic and constructs that still produces the desired output, so it is difficult for an SA mechanism to evaluate all types of program structure. There is no doubt that an SA mechanism is more difficult to design, especially one that can feasibly evaluate complex programs. This is where DA has an advantage over SA, because it evaluates solely the output comparison rather than the program structure itself. Consequently, SA is more suitable for less complex programs, for which the evaluation is easier to perform.

Hybrid Analysis

Last but not least, hybrid analysis (HA) is a combination of DA and SA. Applying HA in the automatic evaluation feature allows the strengths of each analysis type to compensate for the limitations of the other. A clear example of HA can be seen in the studies of Zougari et al. [17], [23], where DA is initiated in the first stage of the evaluation process, followed by SA in the second stage.

As indicated in their study, the DA stage ensures that the student's program is able to execute and produce the desired output through an automated testing process; as usual, the DA mechanism requires a set of output test cases, which it applies in the evaluation process. For the SA stage, the authors adopted the graph matching technique for the structure similarity analysis. Clearly, the implementation of HA helps improve the accuracy of the assessment, since it not only performs output comparison for the evaluation but also inspects the program structure.
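The two-stage idea can be condensed into a small scoring rule; the sketch below is an illustrative assumption, and the weights are not taken from Zougari et al.'s study.

```python
def hybrid_grade(da_score: float, sa_score: float,
                 da_weight: float = 0.6) -> float:
    """Blend a dynamic-analysis score (fraction of test cases passed)
    with a static structural-similarity score."""
    # Stage 1: if the program fails every test case, still inspect the
    # structure so partially correct logic earns some credit.
    if da_score == 0.0:
        return 0.5 * sa_score
    # Stage 2: otherwise blend output correctness with structural similarity.
    return da_weight * da_score + (1 - da_weight) * sa_score

print(hybrid_grade(0.8, 0.9))   # 0.84
```

This kind of blending is why tools such as T10 can credit programs whose output format is merely "weak": a pure pass/fail output comparison would reject them outright.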

3. Related Work

Many previous studies have analysed the existing APATs and produced interesting discussions on them. Ala-Mutka [21] surveyed existing APATs based on their analysis types, DA and SA (see Section 2.1 for details). In addition, Ala-Mutka mentioned the limitations of DA and preferred applying a semi-automatic assessment approach as an alternative for these issues. Ala-Mutka also recommended that APAT development be directed at a global rather than a local scale, so that courses at other universities can adopt the tools as well.

Douce et al. [3], on the other hand, classified three main generations of APAT development that specifically apply DA for the automatic evaluation feature. The first generation comprised the initial attempts to automate the assessment of programming assignments. The second generation was when command-line-based APATs started to appear. Lastly, the third generation of APATs is the web-based type.

The study by Ihantola et al. [20] presents the key features supported by APATs and the different approaches used, from both pedagogical and technical points of view. However, the work of Ihantola et al. does not list the tools aligned with the identified key features. Souza et al. [26] addressed this gap by listing the existing tools and classifying them based on their features, interface type, and supported programming languages. However, none of these related studies focuses on identifying and highlighting the specifications of the more recently proposed APATs. Thus, this study addresses this gap by analysing recently proposed APATs, as well as promoting recommended studies that should be the focus in the future. The next section explains how this study applied its methodology.

4. Methodology

This study selected 10 studies that specifically describe a proposed APAT in terms of its characteristics, such as the tool name and its features. As mentioned before, this study targets recently proposed APATs, since no other study has covered them; the search was therefore restricted to the years 2014 to 2019. The identified studies are later referred to as the selected tools and are used in Table 2. Table 1 summarizes the identified recently proposed APATs by tool name, description, and year of publication.

Table 1. Recently proposed APATs from other studies

T1. Judge System (UOJ) [27], 2014: UOJ is an online, web-based APAT developed to assist instructors in assessing programming assignments and to let students practise the C++ programming language at Universiti Teknologi PETRONAS. UOJ is also built with a swarm-inspired automatic test case generation feature known as the Particle Swarm Optimization (PSO) algorithm.

T2. Automata [28], 2014: Automata is a machine-learning-based APAT that uses the Latent Semantic Analysis (LSA) technique to assess the student program. To facilitate the tool's automatic grading feature, a rubric was designed and integrated; the rubric outlines the grading based on the correctness of the student program's functionality.

T3. FaSt-generator [29], 2015: FaSt-generator is an APAT specialized in automatic test case generation. The tool is built on top of a test data generation framework from the authors' previous work [8], [30]. Its purpose is to generate an adequate set of test data used to execute both functional and structural testing for the evaluation process.

T4. SAUCE [31], 2015: SAUCE, which stands for "System for AUtomated Code Evaluation", is a web-based APAT that assists instructors in assessing parallel programming assignments. It applies a lightweight queue system for running the submission tests. SAUCE has been tested on various examples of parallel programming, including OpenMP, MPI, and CUDA.

T5. Ask-Elle [32], 2015: Ask-Elle is a tutor-type tool that supports the stepwise development of simple functional programs in Haskell. Ask-Elle is considered an APAT due to its ability to evaluate the correctness of incomplete student programs, give hints, and allow instructors to add particular feedback information to the programming exercises.

T6. Programming Grading Assistant (PGA) [33], 2016: PGA is essentially a mobile-app version of an APAT that uses a QR code scanner and Optical Character Recognition (OCR) to evaluate handwritten programming exercises. The researchers also designed a framework, namely the Mobile Grading Framework (MGF), which was later adopted in the PGA system architecture.

T7. Online Learning System [34], 2017: The authors do not specify a name for the proposed APAT and simply refer to it as an online learning system. It is implemented on the open-source learning platform Moodle, and its automatic evaluation feature is based on the Virtual Programming Lab (VPL) plugin.

T8. Aristotle [35], 2017: Aristotle is an APAT that applies semi-automated assessment with the capability of assessing any type of programming language. It is also integrated with GitHub Classroom, a free web-based tool for instructors that is basically used for managing assignments.

T9. ASys [36], 2018: The authors present two major contributions in their study: a web-based APAT they call ASys, and a new automatic evaluation feature they refer to as JavAssess. ASys's purpose is to assist instructors in assessing final exams and to permit students to use the tool for self-evaluated learning of Java programming. JavAssess is basically a Java library that applies an HA evaluation mechanism for evaluating Java code.

T10. Paprika [37], 2019: Paprika is the name of a proposed APAT that applies HA for its automatic evaluation feature. Its usage has shown that it can produce correct scores and accurate feedback even for student programs with weak output; weak output, as described by the authors, is an output format that differs from the output required by the test case.

Table 2 presents a classification of the 10 selected tools listed in Table 1. The tools are classified on two aspects: technical and pedagogical. In the context of this study, the technical aspects of an APAT are the elements that cover its functionality; specifying these elements helps in determining the recent trend in proposed APAT functionality and also provides details for improving new APAT development in the future. The technical aspects this study emphasises are the tool's novelty feature, the analysis type used for its automatic evaluation feature, the programming languages supported, and its assessment approach, whether automatic or semi-automatic. In other words, the technical aspects this study focuses on mostly involve the APAT mechanism. As for the pedagogical aspects of APAT, this study specifies whether the tool uses a rubric for its grading and whether a learning taxonomy is involved. Both of these elements are significant to apply in an APAT, especially for its pedagogical aspect. Essentially, a rubric is documentation that lays out the specific expectations of grading [38]; the importance of applying a rubric when grading assignments is that it is capable of providing consistency of grades [39], so unfair grading of students' assignments can be prevented if a rubric is used. Meanwhile, a learning taxonomy can be referred to as a scheme for classifying educational goals, objectives, standards [40], and grading as well [4], [41]; using a learning taxonomy in assessment helps in measuring students' learning performance.

Table 2. The classification of recently proposed APATs by pedagogical and technical aspect

Tool | Novelty feature | Analysis type | Programming languages supported | Assessment approach | Applying rubric | Applying learning taxonomy
---- | --------------- | ------------- | ------------------------------- | ------------------- | --------------- | --------------------------
T1 | Automatic test case generation | DA | C++ | Automatic | No | No
T2 | Machine learning approach | SA | More than one language | Automatic | Yes | No
T3 | Automatic test case generation | HA | Java | Automatic | No | No
T4 | Assessing a non-introductory programming course | DA | More than one language | Automatic | No | No
T5 | Assessing a non-typical programming language | DA | Haskell | Automatic | No | No
T6 | Assessing handwritten, paper-based programming | SA | Java | Automatic | No | No
T7 | Plugin-based system | DA | More than one language | Automatic | No | No
T8 | Plugin-based system | DA | More than one language | Semi-automatic | No | No
T9 | HA approach | HA | Java | Automatic | No | No
T10 | HA approach | HA | More than one language | Automatic | No | No

5. Results and Discussions

In the previous section, this study classified the ten identified recently proposed APATs according to their pedagogical and technical aspects, and the elements of both aspects were specified throughout the classification process. The following are the findings based on the classification outlined in the previous section:

• The findings of the recently proposed APATs based on their novelty features

Figure 3. Total number of tools based on their novelty features

In the context of this study, the novelty feature of an APAT can be considered a feature designed to facilitate a particular scope of the study: usually the authors specify the scope of the study, the proposed APAT needs to be able to cater to it, and this can be referred to as the novelty feature. From the findings of this study, seven categories of novelty feature were discovered; the results are presented in Figure 3. The HA approach, as previously discussed, is the analysis type that combines static and dynamic analysis, and Figure 3 shows that two out of the 10 tools, T9 and T10, possess this novelty feature. T7 and T8, on the other hand, specialize in a plugin-based architecture. Only one tool, T6, has the capability of assessing handwritten, paper-based programming, which is very distinctive compared with the rest, which focus more on digital submissions. Besides that, one particular tool, T5, is built specifically to assess the Haskell programming language; the tool seems to aim at assessing a non-typical programming language, in contrast to typical ones such as C, C++, C#, Java, and Python. Another tool with a unique trait relating to its novelty feature, T4, has the ability to assess parallel programming, meaning the tool focuses on a more advanced programming course. Last but not least, one tool, T2, incorporates a machine learning approach within its study, while the remaining two tools, T1 and T3, feature automatic test case generation.

• The findings of the recently proposed APATs based on their analysis type

Figure 4. Total number of tools based on the adoption of analysis type

Based on Figure 4, it can clearly be seen that DA is still highly adopted as the automatic evaluation feature of the proposed tools, at 50%. The main issue with the tools that adopt DA is that none of them includes any type of security mechanism, such as sandboxing or CPU usage limitation. This is crucial for the sake of the tools' system environments, since the potential for executing malicious code is high. The tools that adopt SA and HA stand at 20% and 30% respectively, which is lower than DA.

• The findings of the recently proposed APATs based on the programming languages supported

Figure 5. Total number of tools based on the programming languages supported

It seems that the recent trend is for tools to strive to support more than one programming language, as depicted in Figure 5: 50% of the tools are able to evaluate more than one programming language. Among the tools that support only a single language, Java is the most common at 30%, while C++ and Haskell each stand at 10%.

• The findings of the recently proposed APATs based on the assessment approach

Figure 6. Total number of tools based on the type of assessment approach

The assessment approach in an APAT can be implemented in two ways: fully automatic or semi-automatic. Semi-automatic assessment is basically the type of assessment approach that performs only partial assessment. It was introduced to overcome the limitation of the DA type of automatic evaluation feature, where the evaluation is based only on the output comparison of test cases. Based on Figure 6, only one tool, T8, applies the semi-automatic approach.

• The findings of the recently proposed APATs based on rubric usage

Figure 7. Total number of tools that use a rubric

From the results depicted in Figure 7, there is no doubt that rubrics are applied very poorly in the recently proposed APATs, since only T2 includes one in its automatic grading feature. Yet a rubric is an essential instrument for ensuring that grades are given consistently for any type and level of assignment, as previously mentioned. Implementing a rubric in an APAT helps make the grades non-binary, as opposed to earlier grading mechanisms, especially the first APAT that was introduced (see Section 2 for further details).


Moreover, a proposed APAT should not settle for binary grading, since binary grades make it difficult for students to perform self-assessment.
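As a sketch only, and not the design of any reviewed tool, a rubric can be encoded directly so that the final grade aggregates weighted criteria; the weights below are illustrative, using the grading criteria of Howatt [14] as rows.

```python
RUBRIC = {                      # criterion -> maximum marks (illustrative)
    "execution":      40,       # compiles, runs cleanly, correct output
    "specifications": 25,       # meets the assignment requirements
    "design":         20,       # sensible structure and logic
    "style":          10,       # naming, layout, idioms
    "comments":        5,       # useful documentation
}

def rubric_grade(scores: dict[str, float]) -> float:
    # scores maps each criterion to a fraction in [0, 1]
    return sum(RUBRIC[c] * scores.get(c, 0.0) for c in RUBRIC)

print(rubric_grade({"execution": 1.0, "specifications": 0.8,
                    "design": 0.5, "style": 1.0, "comments": 0.0}))  # 80.0
```

Encoding the rubric this way yields graded partial credit instead of a single pass/fail bit, which is precisely the consistency benefit noted in [39].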

• The findings of the recently proposed APATs based on the application of a learning taxonomy

Figure 8. Total number of tools that apply any learning taxonomy

The situation is the same for the awareness of applying a learning taxonomy within an APAT, which seems very low; in fact, it is even worse, since none of the tools applies one, as presented in Figure 8. Applying a learning taxonomy in an APAT supports instructors in monitoring students' learning achievement. This result shows that these recent tools have neglected an essential element of assessment: using a learning taxonomy as guidance for monitoring students' learning achievement.
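Purely as an illustration of what such support could look like, each exercise can be tagged with a Bloom's taxonomy level [40] so that an APAT reports a per-level mastery profile rather than a single score; all names below are hypothetical.

```python
BLOOM_LEVELS = ["remember", "understand", "apply",
                "analyse", "evaluate", "create"]

def mastery_profile(results: list[tuple[str, bool]]) -> dict[str, float]:
    # results pairs each graded exercise's taxonomy level with pass/fail
    profile = {lvl: [0, 0] for lvl in BLOOM_LEVELS}   # [passes, attempts]
    for level, passed in results:
        profile[level][0] += passed                   # bool counts as 0/1
        profile[level][1] += 1
    return {lvl: p / n for lvl, (p, n) in profile.items() if n}

print(mastery_profile([("apply", True), ("apply", False), ("analyse", True)]))
# {'apply': 0.5, 'analyse': 1.0}
```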

6. Conclusion

This study conducted a critical review of ten recently proposed APATs (2014–2019) by classifying the pedagogical and technical aspects of the tools. The findings of this study show that most of these recently proposed APATs are very weak in the pedagogical aspect compared with the technical aspect. Thus, for future APAT studies, researchers should focus more on incorporating the pedagogical aspect, especially the use of a rubric for grading and the application of a learning taxonomy for highlighting students' learning achievement.

As for the technical aspects of APAT, researchers should strive to propose tools that use hybrid analysis for the automatic evaluation feature, as opposed to static or dynamic analysis alone, because hybrid analysis allows the strengths of each analysis type to compensate for the limitations of the other. Nevertheless, no study has highlighted the overall performance of these three analysis types compared with one another, and researchers should focus on this as well.

References

1. R. A. Mata-Toledo and P. K. Cushman, Introduction to Computer Science. Tata McGraw-Hill Publishing Company Limited, 2003, p. 46.
2. B. Cheang, A. Kurnia, A. Lim, and W. C. Oon, "On automated grading of programming assignments in an academic institution," Comput. Educ., vol. 41, no. 2, pp. 121–131, 2003.
3. C. Douce, D. Livingstone, and J. Orwell, "Automatic test-based assessment of programming," J. Educ. Resour. Comput., vol. 5, no. 3, p. 4–es, 2005.
4. A. Mustapha, N. A. Samsudin, N. Arbaiy, R. Mohamed, and I. R. Hamid, "Generic assessment rubrics for computer programming courses," Turkish Online J. Educ. Technol., vol. 15, no. 1, pp. 53–61, 2016.
5. J. Hollingsworth, "Automatic graders for programming classes," Commun. ACM, vol. 3, no. 1, pp. 528–529, 1960.
6. R. Romli, E. A. Abdurahim, M. Mahmod, and M. Omar, "Current practices of dynamic-structural testing in programming assessments," J. Telecommun. Electron. Comput. Eng., vol. 8, no. 2, pp. 153–159, 2016.
7. R. Romli, S. Sulaiman, and K. Z. Zamli, "Improving the reliability and validity of test data adequacy in programming assessments," J. Teknol. (Sciences Eng.), 2015.
8. R. Romli, S. Sulaiman, and K. Z. Zamli, "Designing a test set for structural testing in automatic programming assessment," vol. 5, no. 3, 2013.
9. T. Ahoniemi and T. Reinikainen, "ALOHA – a grading tool for semi-automatic assessment of mass programming courses," in Proc. 6th Balt. Sea Conf. Comput. Educ. Res. (Koli Calling 2006), 2006, pp. 139–140.
10. N. Yusof, N. A. M. Zin, and N. S. Adnan, "Java programming assessment tool for assignment module in Moodle e-learning system," Procedia – Soc. Behav. Sci., vol. 56, pp. 767–773, 2012.
11. Z. A. Al-Khanjari, J. A. Fiaidhi, R. A. Al-Hinai, and N. S. Kutti, "PlagDetect: a Java programming plagiarism detection tool," ACM Inroads, vol. 1, no. 4, p. 66, 2010.
12. M. Ghosh, B. Verma, and A. Nguyen, "An automatic assessment marking and plagiarism detection," in Proc. First Int. Conf. Inf. Technol. Appl., 2002, pp. 489–494.
13. S. Parihar, "Automatic grading and feedback using program repair for introductory programming courses," in Annual Conference on Innovation and Technology in Computer Science Education, 2017, pp. 92–97.
14. J. W. Howatt, "On criteria for grading student programs," SIGCSE Bull., vol. 26, no. 3, 1994.
15. D. Fonte, D. Cruz, A. L. Gançarski, and P. R. Henriques, "A flexible dynamic system for automatic grading of programming exercises," in OpenAccess Series in Informatics, 2013, pp. 129–144.
16. T. Tang, R. Smith, S. Rixner, and J. Warren, "Data-driven test case generation for automated programming assessment," in Annual Conference on Innovation and Technology in Computer Science Education, 2016, pp. 260–265.
17. S. Zougari, M. Tanana, and A. Lyhyaoui, "Hybrid assessment method for programming assignments," Colloq. Inf. Sci. Technol. (CiSt), pp. 564–569, 2017.
18. P. R. Choudhury, N. Wats, R. Jaiswal, and R. H. Goudar, "Automated process for assessment of learners programming assignments," in International Conference on Intelligent Systems and Control: Green Challenges and Smart Solutions, 2014, pp. 281–285.
19. Y. Liang, Q. Liu, J. Xu, and D. Wang, "The recent development of automated programming assessment," in Proc. 2009 Int. Conf. Comput. Intell. Softw. Eng. (CiSE 2009), 2009.
20. P. Ihantola and O. Seppälä, "Review of recent systems for automatic assessment of programming assignments," in Proceedings of the 10th Koli Calling International Conference on Computing Education Research, 2010.
21. K. M. Ala-Mutka, "A survey of automated assessment approaches for programming assignments," vol. 3408, 2007.
22. S. M. Arifi, "Automatic program assessment using static and dynamic analysis," in Proceedings of the 2015 IEEE World Conference on Complex Systems, 2015.
23. S. Zougari, M. Tanana, and A. Lyhyaoui, "Towards an automatic assessment system in introductory programming courses," in Proc. 2016 Int. Conf. Electr. Inf. Technol. (ICEIT 2016), 2016, pp. 496–499.
24. S. Nutbrown and C. Higgins, "Static analysis of programming exercises: fairness, usefulness and a method for application," vol. 3408, no. June, 2016.
25. "The design of an automated C programming assessment using pseudo-code comparison technique," in National Conference on Software Engineering and Computer Systems, 2007.
26. D. M. Souza, K. R. Felizardo, and E. F. Barbosa, "A systematic literature review of assessment tools for programming assignments," in Proceedings – 2016 IEEE 29th Conference on Software Engineering Education and Training, 2016, pp. 147–156.
27. O.-M. Foong, Q.-T. Tran, S.-P. Yong, and H. M. Rais, "Swarm inspired test case generation for online C++ programming assessment," in 2014 Int. Conf. Comput. Inf. Sci., 2014, pp. 1–5.
28. S. Srikant and V. Aggarwal, "A system to grade computer programming skills using machine learning," in Proc. 20th ACM SIGKDD Int. Conf. Knowl. Discov. Data Min. (KDD '14), 2014, pp. 1887–1896.
29. R. Romli, S. Sulaiman, and K. Z. Zamli, "Improving automated programming assessments: user experience evaluation using FaSt-generator," Procedia Comput. Sci., vol. 72, pp. 186–193, 2015.
30. R. Romli, S. Sulaiman, and K. Z. Zamli, "Test data generation framework for automatic programming assessment," in 2014 8th Malaysian Software Engineering Conference, 2014, pp. 84–89.
31. M. Schlarb, C. Hundt, and B. Schmidt, "SAUCE: a web-based automated assessment tool for teaching parallel programming," in European Conference on Parallel Processing, 2015, pp. 54–65.
32. A. Gerdes, B. Heeren, J. Jeuring, and L. T. van Binsbergen, "Ask-Elle: an adaptable programming tutor for Haskell giving automated feedback," Int. J. Artif. Intell. Educ., vol. 27, no. 1, pp. 65–100, 2017.
33. I.-H. Hsiao, "Mobile grading paper-based programming exams: automatic semantic partial credit assignment approach," in European Conference on Technology Enhanced Learning, 2016, pp. 110–123.
34. P. E. Robinson and J. Carroll, "An online learning platform for teaching, learning, and assessment of programming," in IEEE Glob. Eng. Educ. Conf. (EDUCON), 2017, pp. 547–556.
35. "… programming assignments," in 2017 IEEE Pacific Rim Conf. Commun. Comput. Signal Process. (PACRIM 2017), vol. 2017–January, 2017, pp. 1–6.
36. D. Insa and J. Silva, "Automatic assessment of Java code," Comput. Lang. Syst. Struct., vol. 53, pp. 59–72, 2018.
37. W. Zhikai and X. Lei, "Grading programs based on hybrid analysis," in Web Information Systems and Applications, 2019, pp. 626–637.
38. D. D. Stevens and A. J. Levi, Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning. Stylus Publishing, LLC, 2005.
39. A. Jonsson and G. Svingby, "The use of scoring rubrics: reliability, validity and educational consequences," Educ. Res. Rev., vol. 2, no. 2, pp. 130–144, 2007.
40. D. R. Krathwohl, "A revision of Bloom's taxonomy: an overview," Theory Pract., vol. 41, no. 4, pp. 212–218, 2002.
41. R. Lister and J. Leaney, "First year programming: let all the flowers bloom," in Proc. Fifth Australas. Conf. Comput. Educ., 2003, pp. 221–230.
