International Scientific Conference eRA-6

ICT-2 Session

1. Measuring Student Satisfaction Using Multicriteria Approach: The Case of Automation Department at TEI Piraeus

E. Sklavounou, A. Spyridakos, G. Psaromiligos, C. Iliopoulos

Evaluation in higher education

Many definitions of the term "evaluation" have been given over time, and each of them encompasses its three dimensions: "process", "outcome" and "impact" evaluation.

Rossi and Freeman (1993) give a definition of evaluation as "the systematic application of social research procedures for assessing the conceptualization, design, implementation, and utility of ... programs".

The various forms of evaluation include the following:

  1. Quantitative Evaluation: “Quantitative evaluation is an assessment process that answers the question, "How much did we do?"”
  2. Qualitative Evaluation: “Qualitative evaluation is an assessment process that answers the question, "How well did we do?"”
  3. Formative Evaluation: This type of evaluation is used to improve programs and promote improvements; it includes techniques also used in other types of evaluation, such as interviews, surveys, data collection and experiments.
  4. Summative Evaluation: “Summative evaluation refers to the assessment of the learning and summarizes the development of learners at a particular time”.

Evaluation of Higher Education Institutes (HEIs) in Greece started in 2005; its purpose is to outline weaknesses and deviations from their mission statements. It takes place in two stages. In the first, known as internal evaluation, academic staff and students participate by filling out questionnaires and taking part in interviews and discussions. A report with a critical evaluation is then produced, listing strengths and weaknesses along with the steps that need to be followed to achieve the goals and to improve the quality of research, teaching and other academic work.

The external evaluation is then conducted by a committee of independent consultants, who hold discussions with the staff and the students and take the internal report into consideration. The members of the committee are scientists unaffiliated with the HEI under evaluation. The results are gathered in the external evaluation report, which is forwarded to the academic unit and to ADIP.

Quality assurance in higher education

Quality Assurance in Higher Education is the activity that aims to maintain and raise quality and to guarantee the improvement of quality standards so as to meet the needs of students. In Greece this role is played by ADIP, an agency financed and supervised by the Ministry of Education. The agency guarantees the transparency of the evaluation process, supports the HEIs in following the procedures for assuring and improving quality, and updates the government and the HEIs on international developments and trends. It also coordinates and organizes the evaluation processes, specifies the form of the evaluation reports and keeps a database of the evaluations of all the HEIs.

In the European Higher Education Area (EHEA), the European Network for Quality Assurance (ENQA), established in 2000, provides for the cooperation of all European quality assurance agencies. Within the Bologna Process, launched with the Bologna declaration of 1999, the European Standards and Guidelines (ESG) were drawn up to be applied in every HEI and QA agency without annulling their autonomy. They comprise three sections: one on internal QA, one on external QA and one on QA agencies.

Student satisfaction

Student Satisfaction is a multidimensional term. Parasuraman et al. (1985, 1988) stated that a student is satisfied when performance meets or exceeds his or her needs and expectations, whereas dissatisfaction occurs when there is a gap between expectations and the services provided. Expectations are created by needs, by what the student has heard, by past experiences and by the promises given by the institution. Measuring student satisfaction is a problem that can be addressed with techniques such as SERVQUAL, SERVPERF, the Student Opinion Survey (SOS), the Student Satisfaction Inventory (SSI) and the Educational Benchmarking Inc. (EBI) instrument called the Undergraduate Business Exit Assessment (UBEA). Finally, there are the multi-criteria analysis methods.

Multi-criteria Analysis (MCA) can be considered a clear and open method: it gives an explicit account of how the outcome follows from the options' performance, so it does not have to be taken on trust from the decision-making team. The weights that are used and the results derived from them constitute an audit trail, and in general the method can serve as a means of communication between the decision-making group and society. One of the main and most important features of the MCA method is the performance matrix, in which the rows describe the options and the columns describe the impacts of the options. The dimensions of the matrix are m×n, with m the number of options and n the number of the options' impacts. Usually the m×n cells of the performance matrix contain numerical values, but they can also be bullet points or color coding.
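As a minimal illustration of such a matrix (all option and criterion names below are hypothetical and serve only to show the m×n layout), consider:

```python
import numpy as np

# Hypothetical 3x4 performance matrix: rows are options, columns are impacts.
options = ["Option A", "Option B", "Option C"]          # m = 3 options
criteria = ["Cost", "Quality", "Speed", "Usability"]    # n = 4 impacts

# Each cell holds the (numerical) performance of an option on a criterion.
performance = np.array([
    [70, 85, 60, 90],
    [80, 75, 95, 65],
    [60, 90, 70, 80],
], dtype=float)

m, n = performance.shape  # m options, n impacts, as in the text

# With (hypothetical) criteria weights summing to 1, a simple weighted
# aggregation scores each option across all criteria.
weights = np.array([0.4, 0.3, 0.2, 0.1])
scores = performance @ weights
for name, score in zip(options, scores):
    print(f"{name}: {score:.1f}")
```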

With Multi-criteria Decision Analysis (MCDA) methods, after the problem is broken down into its parts, we can calculate the success level of the options against the goals, calculate the relative weighting of the goals and then recompose the parts of the problem. All of this is carried out using appropriate software developed for the MCDA method.

The MUSA (Multi-criteria Satisfaction Analysis) methodology is a multicriteria approach to the aggregation and disaggregation of satisfaction measurement problems arising from the provision of services, the purchase of products, etc. It is based on multicriteria decision analysis, following the principles of the analytic-synthetic approach and value system theory. The MUSA method rests on the hypothesis that the total satisfaction of an individual customer within a group of customers depends on a set of variables expressing the features of the provided service or product.
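Following the notation of Grigoroudis and Siskos (2002), this hypothesis can be written compactly as an additive value model (a sketch of the standard formulation, not specific to this study):

$$ Y^{*} = \sum_{i=1}^{n} b_i X_i^{*}, \qquad \sum_{i=1}^{n} b_i = 1, \quad b_i \ge 0, $$

where $Y^{*}$ is the normalised global value function of total satisfaction, $X_i^{*}$ the partial value function of the $i$-th satisfaction criterion, $b_i$ its weight and $n$ the number of criteria. The model parameters are estimated by ordinal regression (via linear programming) so that the customers' stated global satisfaction judgements are reproduced as consistently as possible.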

The stages for implementing the MUSA method are:

  1. Design of the questionnaire.
  2. Completion of the questionnaire by the customers.
  3. Preliminary control of the collected data, to ascertain whether the answers given by the students are logically consistent.
  4. Insertion of the checked data into the appropriate MUSA software.
  5. Extraction of the results concerning customer satisfaction, which include the weights, the average satisfaction index, the average demanding index, the total satisfaction function and the average effectiveness index (a computational sketch of the two main indices follows this list).
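The sketch below illustrates, under stated assumptions, how two of these indices are commonly defined in the MUSA framework (Grigoroudis and Siskos, 2002): the average satisfaction index as the frequency-weighted mean of the estimated level values, and the average demanding index as a comparison of the estimated value function with a linear one. All numbers are hypothetical.

```python
# Minimal sketch of two MUSA indices, assuming the standard definitions of
# Grigoroudis and Siskos (2002); all numbers below are hypothetical.

# Estimated (normalised) value function for a 5-point satisfaction scale:
# y[m] is the value of level m, with y[0] = 0 and y[-1] = 100. The concave
# shape chosen here corresponds to non-demanding respondents.
y = [0.0, 40.0, 65.0, 85.0, 100.0]

# Frequencies (%) of respondents choosing each satisfaction level.
p = [5.0, 10.0, 20.0, 40.0, 25.0]

alpha = len(y)  # number of levels of the ordinal scale

# Average satisfaction index: frequency-weighted mean of the level values,
# expressed in [0, 100].
S = sum(p_m * y_m for p_m, y_m in zip(p, y)) / 100.0

# Average demanding index in [-1, 1]: compares the estimated value function
# with the linear ("neutral") one over the interior levels; negative values
# indicate non-demanding respondents, as in the results of this study.
linear = [100.0 * m / (alpha - 1) for m in range(alpha)]
D = (sum(linear[m] - y[m] for m in range(1, alpha - 1))
     / sum(linear[m] for m in range(1, alpha - 1)))

print(f"Average satisfaction index S = {S:.2f}%")   # 76.00%
print(f"Average demanding index   D = {D:+.2f}")    # -0.27
```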

Design of the research, data collection and results

Students' satisfaction can be assessed with emphasis on their satisfaction with several criteria, which are presented below. They are broken down into several sub-criteria in order to capture the satisfaction and dissatisfaction of students in a better and more complete way. These are:

  1. The course: it concerns the course's objectives.
  2. The lecturer: it refers to the lecturer's quality.
  3. The teaching assistant: it considers the quality of the lab practice.
  4. The laboratory practice: it refers to the laboratory's equipment.
  5. The student: it refers to the students of the institution who participate in the research.

Feedback was collected using a questionnaire that was easy to understand and complete. It was distributed during the breaks between classes, and completing it took about 10-12 minutes.

The platform used is part of a site for operational research and allows the user to conduct a survey with the criteria and sub-criteria of his or her choice. Our five criteria are Course Delivery, Course Material, Course Structure, Evaluation and Teaching, and each has its own sub-criteria: the first has four, the second also four, the third and the fourth three each, and the fifth six. All sub-criteria are assessed on a 5-point scale, as are the overall criteria satisfaction and the global satisfaction. The sub-criteria are drawn from the most important questions included in the questionnaire; specifically, they are based on the questions regarding the Course and the Lecturer (sections A and B of the questionnaire).
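For concreteness, the snippet below encodes the survey structure just described; the individual sub-criterion names are not listed in the text, so only their counts per criterion are represented.

```python
# Criteria hierarchy of the survey as described in the text; only the
# number of sub-criteria per criterion is known, so names are omitted.
SCALE_POINTS = 5  # every (sub-)criterion is rated on a 5-point scale

criteria = {
    "Course Delivery": 4,   # number of sub-criteria
    "Course Material": 4,
    "Course Structure": 3,
    "Evaluation": 3,
    "Teaching": 6,
}

total_subcriteria = sum(criteria.values())
print(f"{len(criteria)} criteria, {total_subcriteria} sub-criteria, "
      f"each on a {SCALE_POINTS}-point scale")
```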

The number of completely filled questionnaires was 156, and the results derived with the help of the MUSA platform are the following.

  • Weights

The satisfaction criteria weights indicate the relative degree of importance that the customers as a whole assign to the satisfaction dimensions that have been defined; the judgement of each criterion's "importance" therefore also depends on the number of criteria used. As illustrated below, Course Structure has the highest weight, deviating strongly from the others. Course Delivery and Course Material follow with the same value of 20.00. Finally, Teaching and Evaluation have the lowest weights, with a very small deviation from each other (8.24 and 8.19 respectively).

Figure 1: Weights

  • Satisfaction

The following graph presents the students' satisfaction, which is 82.75% overall. The highest percentage, 90.31, refers to Course Structure. Course Delivery and Course Material follow, with a small deviation from each other, at 84.36 and 82.05 respectively. Students are least satisfied with Teaching and Evaluation, as shown by their percentages of 65.94 and 56.49 respectively.

Figure 2: Satisfaction
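Because the MUSA model is additive, the global index should be close to the weight-weighted average of the criterion indices. As a rough consistency check, assuming (since the text does not state the Course Structure weight) that the five weights sum to 100%, so that Course Structure carries 100 - (20.00 + 20.00 + 8.24 + 8.19) = 43.57:

$$ \sum_i b_i S_i \approx 0.4357 \cdot 90.31 + 0.20 \cdot 84.36 + 0.20 \cdot 82.05 + 0.0824 \cdot 65.94 + 0.0819 \cdot 56.49 \approx 82.69, $$

which is close to the reported global satisfaction of 82.75%; a small discrepancy is expected, since the global index is estimated from the students' overall judgements rather than computed from the criterion indices.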

  • Demanding

The following graph presents the students' overall demanding index, which is -59.87%. As illustrated below, students are most demanding with respect to Teaching (-1.98), whose value is the closest to zero. Evaluation follows with a small deviation, at -2.57. Students show low demand for Course Delivery and Course Material, both with values of about -60, and the lowest demand for Course Structure, at -81.76.

Figure 3: Demanding

  • Action diagram

Combining the satisfaction criteria weights with the average satisfaction indices yields a series of action diagrams. These diagrams can identify the strong and the weak points of student satisfaction, as well as where efforts must focus in order to achieve improvement.

Every action diagram is divided into quadrants according to the performance (satisfaction) and the weights (importance) of the criteria (a computational sketch of this classification follows the list):

  1. Status quo (low performance and low importance): it is the third quadrant of the diagram, and usually no additional action is demanded from the company, because the specific satisfaction dimensions are not important to the customers. This area is the third priority of the business.
  2. Power area (high performance and high importance): it is the first quadrant of the diagram, and the characteristics that fall in it can be used as the company's comparative advantage against the competition. In most cases these satisfaction dimensions are the basic reason the product or service in question was chosen; they are the second priority of the business, especially when there is room for improvement.
  3. Action area (low performance and high importance): it is the fourth quadrant of the diagram and contains the most critical characteristics, which must definitely be improved in order to raise the customers' satisfaction level. This area is the first priority of the business, since the quadrant contains important criteria with which the customers are not satisfied. In this research, as shown below, this area is empty.
  4. Transferring funds area (high performance and low importance): it is the second quadrant of the diagram; the funds, and more generally the effort, that the business devotes to these characteristics of the product or service could be used in a different way. This area is the last priority of the business, because it includes characteristics that are not important to the customers, while the company's performance on them is already high.
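As a rough illustration, the sketch below assigns the study's criteria to quadrants by comparing each criterion's weight and satisfaction index with simple cut-offs. Two assumptions are flagged explicitly: the Course Structure weight is not printed in the text (it is derived here by assuming the weights sum to 100%), and the cut-off rule (the mean of each axis) is a guess, since the text does not say how the axes of Figure 4 are centred.

```python
# Sketch of the action-diagram classification under stated assumptions.
# The Course Structure weight is not printed in the text; assuming the five
# MUSA weights sum to 100%, it would be 100 - (20.00 + 20.00 + 8.24 + 8.19).
weights = {
    "Course Structure": 43.57,  # assumed, see note above
    "Course Delivery": 20.00,
    "Course Material": 20.00,
    "Teaching": 8.24,
    "Evaluation": 8.19,
}
satisfaction = {  # average satisfaction indices (%) reported in the text
    "Course Structure": 90.31,
    "Course Delivery": 84.36,
    "Course Material": 82.05,
    "Teaching": 65.94,
    "Evaluation": 56.49,
}

# Assumed cut-offs: the mean of each axis.
w_cut = round(sum(weights.values()) / len(weights), 2)            # = 20.00
s_cut = round(sum(satisfaction.values()) / len(satisfaction), 2)  # = 75.83

QUADRANTS = {
    (True, True): "power area",
    (True, False): "action area",
    (False, True): "transferring funds area",
    (False, False): "status quo",
}

# Course Delivery and Course Material sit exactly on the weight cut-off,
# matching the text's remark that they lie on the vertical axis; the strict
# inequality below pushes them to the transferring-funds side.
for name in weights:
    important = weights[name] > w_cut
    performing = satisfaction[name] > s_cut
    print(f"{name:16s} -> {QUADRANTS[important, performing]}")
```

Under these assumptions the output reproduces the placement described in the text: Course Structure in the power area, Teaching and Evaluation in the status quo, and the action area empty.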

As shown in the diagram below, two characteristics, Teaching and Evaluation, lie in the status quo area, while the action area contains none. The power area contains one characteristic, Course Structure, and the other two characteristics lie exactly on the vertical axis between this area and the transferring funds area.

Figure 4: Action diagram

  • Improvement diagram

Although the action diagrams show which satisfaction dimensions must be improved, they can define neither the outcome of the improvement measures nor the extent of the effort needed to achieve the expected improvement.

This problem is solved with the "improvement diagrams". As shown in the diagram below, the quadrant with high effectiveness and low effort (first priority) contains three characteristics: Evaluation, Course Material and Course Structure. The quadrant with low effectiveness and low effort (second priority) contains two characteristics: Teaching and Course Delivery. Finally, the quadrant with high effectiveness and high effort (fourth priority) and the quadrant with low effectiveness and high effort (third priority) contain no characteristics.

Figure 5: Improvement diagram
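The axes of the improvement diagram are usually derived from the indices introduced earlier. A common formulation in the MUSA literature, assumed here rather than stated in the text, measures the effectiveness of improving criterion $i$ by the average improvement index

$$ I_i = b_i \, (1 - S_i), $$

where $b_i$ is the criterion's weight and $S_i$ its average satisfaction index (both normalised to $[0, 1]$); the effort that an improvement is likely to require is judged from the demanding index $D_i$, with more demanding respondents implying greater effort.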

Conclusions and actions to be followed

From the research conducted in the Automation Department of TEI Piraeus, based on 156 questionnaires, we can conclude that the most important factor as perceived by the students is Course Structure; by devoting more resources and effort to improving it, the overall satisfaction of the students would rise noticeably. The next two criteria valued by the students are Course Material and Course Delivery: students consider them important, and the institution could promote overall improvement by improving them. As for Evaluation and Teaching, students consider that these do not play such an important role in their studies or, subsequently, in their post-graduate career development.

The satisfaction analysis showed that Course Structure keeps students satisfied, while the two criteria that follow (Course Material and Course Delivery) also score high percentages. Students are not completely satisfied with Teaching and Evaluation in the Department but, as mentioned above, they do not mind, since they consider these of secondary importance.

The demand analysis showed that students are little demanding with regard to Course Delivery and Course Material, and least demanding with regard to Course Structure. It is obvious that they demand better Teaching and a better Evaluation system. It is known that the higher the value of the demanding index, the more satisfaction has to be improved to meet students' expectations.

The action diagram showed that Course Structure is located in the first quadrant, while Course Delivery and Course Material lie on the vertical axis. The location of these three criteria in this area means that efforts could be made to further improve the satisfaction deriving from them and turn them into the department's competitive advantage. As for Teaching and Evaluation, they are located in the status quo area, where alterations would add little value.

Finally, from the improvement diagram, which shows the impact that alterations would have if they were made, we see that the department has to focus on changing the services provided for Evaluation, Course Material and Course Structure; such changes would have an immediate effect and can be made with minimum effort, because the students are not demanding. Teaching and Course Delivery belong to the second priority area, meaning they are satisfaction dimensions with low demand and a low improvement index; so even if the department changed these sectors, overall satisfaction would not be greatly affected.

References

  1. Grigoroudis E. and Politis Y., “Employees satisfaction: Methodological approaches, measurement dimensions and new technologies”, Proceedings of the Qualifying Labour, Social Issues and New Technologies International Conference, Crete, 2005.
  2. Betz E. L., Menne J. W., Starr A. M. and Klingensmith J. E., “A dimensional analysis of college student satisfaction”, Measurement and Evaluation in Guidance, 1971.
  3. Upcraft M. L. and Schuh J. H., “Assessment in student affairs: A guide for practitioners”, Jossey-Bass, San Francisco, 1996.
  4. Grigoroudis E. and Siskos Y., “Preference disaggregation for measuring and analysing customer satisfaction: The MUSA method”, European Journal of Operational Research, 2002.
  5. Psaromiligkos Y. and Retalis S., “Re-Evaluating the Effectiveness of a Web-based Learning System: A Comparative Case Study”, International Journal of Educational Multimedia and Hypermedia, 2003.
  6. Kytagias D., Psaromiligkos Y., Spyridakos A., Steganou B., Kytagias C., Lalos P. and Dimakopoulos N., “An Integrated System for the Traditional & Distance Learning at TEI Piraeus”, Proceedings of the 4th Hellenic Conference with International Participation “Information and Communication Technologies in Education”, Athens, 2004.
  7. Koilias Chr., “Evaluating Students' Satisfaction: The Case of Informatics Department of TEI Athens”, Operational Research, An International Journal, Vol. 5, 2005.
  8. “Multi-criteria analysis: a manual”, Department for Communities and Local Government, London, 2009.
  9. Zopounidis C. and Pardalos P., “Handbook of Multicriteria Analysis”, 1st Edition, Springer, ISBN 978-3-540-92827-0, 2010.
  10. Keeney R. L. and Raiffa H., “Decisions with Multiple Objectives: Preferences and Value Tradeoffs”, John Wiley, New York, 1976; reprinted, Cambridge University Press, 1993.
  11. Keeney R. L. and von Winterfeldt D., “The analysis and its role for selecting nuclear repository sites”, in Rand G. K. (ed.), Operational Research '87, Elsevier Science Publishers B.V. (North Holland), 1988.
  12. Regan-Cirincione P., “Improving the accuracy of group judgment: A process intervention combining group facilitation, social judgment analysis, and information technology”, Organizational Behavior and Human Decision Processes, 58, 246-270, 1994.
  13. Phillips L. D., “A theory of requisite decision models”, Acta Psychologica, 56, 29-48, 1984.
  14. Clemen R. T., “Making Hard Decisions: An Introduction to Decision Analysis”, 2nd edition, Duxbury Press, Belmont, CA, 1996.
  15. Chen S. J. and Hwang C. L., “Fuzzy Multiple Attribute Decision Making: Methods and Applications”, Springer Verlag, Berlin, 1992.
  16. “Cluster Analysis: Basic Concepts and Algorithms”.
  17. Bennett S. N., “Cluster analysis in educational research: A non-statistical …”, 1975.
  18. Siskos Y., Tsotsolas N. and Christodoulakis N., “Data Set Generator for Customer Satisfaction Surveys”, University of Piraeus, Greece.
  19. Grigoroudis E. and Siskos Y., “MUSA: a Decision Support System for Evaluating and Analyzing Customer Satisfaction”, Technical University of Crete and University of Piraeus, Greece.

5. Analog and Digital Modulation: A practical approach