A COMPARISON STUDY: UNDERSTANDING EXPERTISE-BASED TRAINING EFFECTS ON THE SOFTWARE EVALUATION PROCESS OF MATHEMATICS EDUCATION PRE-SERVICE TEACHERS

Hatice SANCAR-TOKMAK1, Lütfi İNCİKABI2

1Mersin University, Computer Education and Instructional Technologies Department

2Kastamonu University, Elementary Education Department

This study aimed to examine the effects of Expertise-Based Training (XBT) on pre-service teachers’ educational software evaluation process through a comparative case study design. The study comprised two cases, which the researchers designed with different teaching strategies: in the first case, XBT-based instruction guided the educational software evaluation process; in the second, traditional instruction was applied to the same process. The themes in the Software Evaluation Checklist of Heinich et al. (2002) served as the reference for explaining how closely each case’s software evaluation process resembled the experts’. Qualitative inquiry and analysis methods were employed throughout the study.

A total of 43 Mathematics Education pre-service teachers, all novices in the evaluation of educational software and registered in a Computer course, participated in the study. In addition, three experts took part in the study.

Data collection instruments consisted of a demographic questionnaire, the pre-service teachers’ journals, the Software Evaluation Checklist of Heinich et al. (2002), a classroom observation form, and the case example. Data were analyzed through two main procedures. First, the groups’ ratings of the same educational software against the criteria in Heinich et al.’s (2002) evaluation form were compared with the experts’ ratings for similarity. Second, the data collected from the XBT-group and traditional-group pre-service teachers via journals and in-class observations were organized into common themes separately, as described by Patton (1990), and compatible themes were merged. The two groups’ educational software evaluation processes were then compared for similarities and differences with regard to the defined themes.
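The first analysis procedure, comparing a group’s checklist ratings with the experts’ ratings of the same software, amounts to a simple percent-agreement calculation per criterion. The sketch below illustrates that idea only; the function name, the six-criterion layout, and all rating values are hypothetical and are not the study’s actual data.

```python
# Percent agreement between a group's checklist ratings and the experts'
# ratings for the same educational software. Ratings are assumed to be
# ordinal scores (e.g., 1-5) on each checklist criterion; every name and
# value below is an illustrative assumption, not data from the study.

def percent_agreement(group_ratings, expert_ratings):
    """Share of checklist criteria on which both raters gave the same score."""
    if len(group_ratings) != len(expert_ratings):
        raise ValueError("rating lists must cover the same criteria")
    matches = sum(g == e for g, e in zip(group_ratings, expert_ratings))
    return matches / len(group_ratings)

# Hypothetical ratings over six checklist criteria.
xbt_group   = [5, 4, 4, 3, 5, 4]
traditional = [5, 3, 2, 3, 4, 5]
experts     = [5, 4, 3, 3, 5, 4]

print(percent_agreement(xbt_group, experts))    # agrees on 5 of 6 criteria
print(percent_agreement(traditional, experts))  # agrees on 2 of 6 criteria
```

A per-criterion match count like this captures only exact agreement; chance-corrected indices (e.g., Cohen’s kappa) would be the natural refinement for a fuller analysis.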

The results showed that XBT group members first worked individually to define detailed criteria for each checklist criterion, then discussed the detailed criteria each member had formed, and finally restructured them into a single detailed criteria list. The traditional group made no such effort to build a detailed criteria list; however, they did search the internet for the meaning of unfamiliar criteria in the checklist.

Both groups omitted searching the literature to clarify the meaning of each criterion. Although the XBT group tried to specify detailed criteria for each criterion in the checklist, some of their detailed criteria did not properly cover the related criterion, which might have decreased their agreement with the experts’ ratings. The XBT groups also differed from the traditional groups in how they graded the educational software: the XBT groups determined measures for each main criterion according to their detailed criteria and impressions, whereas the traditional groups graded the software solely on their impressions.

In evaluating each educational software package against Heinich et al.’s (2002) checklist, the XBT group reached a higher rate of agreement with the experts than the traditional group did. Most of the time, both groups’ agreement with the experts occurred at the high end of the scale, while the items on which the experts and the XBT groups agreed showed a closer distribution across the rating scale.

Keywords: Expertise-Based Training, Mathematics Teacher Candidates, Software Evaluation, Novices

SOURCES:

Heinich, R., Molenda, M., Russell, J. D., & Smaldino, S. E. (2002). Instructional media and technologies for learning (7th ed.). Upper Saddle River, NJ: Prentice-Hall.

Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage.