Improving Engineering Graduate Education through Outcomes Assessment

Graduate education, especially that leading to the Ph.D. degree, is administered very differently from undergraduate education, especially in engineering. Undergraduate engineering programs have a detailed set of required courses with a strict prerequisite structure, required culminating design experiences, and in some programs a senior thesis. There are published “bodies of knowledge” that describe the expectations for graduates of programs called “X,” and external accreditation at the program level is a prerequisite for professional registration. Graduate programs, on the other hand, are often tailored specifically and individually to each research niche and indeed to each student. A Ph.D. advisory committee, unique to each student, oversees the student's progress towards the degree, but the main responsibility for guiding and evaluating student learning lies with the adviser. The adviser mentors each student individually towards the production of new knowledge that is subject to external review and acceptance by the appropriate global technical community.

Outcomes assessment has long been part of undergraduate engineering education; engineering accreditation criteria have called for direct assessment of learning outcomes for more than a decade. Graduate engineering programs are not accredited, and hence assessment has been required only relatively recently, as part of regional accreditation at the university level. The view of graduate programs as fundamentally different from undergraduate programs, specifically because of their individualization and the adviser's role in establishing the success criteria and determining completion, has inhibited the acceptance of learning-outcomes assessment.

To address this reluctance to establish and implement assessment of learning outcomes in graduate programs, a set of generic learning outcomes and common evaluation points was developed that is applicable across the full set of programs. There are 30 graduate programs in SEAS, three in each of 10 disciplines; the three are generally the non-thesis master's, the Master of Science, and the Ph.D. The common aspects of these programs were identified, and learning goals and assessment tools were developed for those common aspects. The use of the assessment tools was tied to milestones in each program (for example, the qualifying examination, proposal presentation, and thesis defense), so that the completed assessment form must accompany the signed form attesting to successful completion of the milestone. Each program customized the learning goals and assessment forms for its program-specific qualifying exam; all programs use the same assessment forms for evaluating the proposal presentation and thesis defense. Thus all 30 graduate programs began measuring and evaluating specific learning outcomes. The data are collected and summarized centrally; the results are returned to each program for interpretation and analysis.

The use of these learning goals and assessment tools has affected the programs in several important (and perhaps unexpected) ways. In the computer engineering program in particular (the presenter is the Director of Computer Engineering Programs), several tangible improvements can be tied directly to the use of the assessment tools and their results. At the qualifying exam, a relatively low pass rate was disappointing to faculty and students alike. Before the assessment tools were in use, it was difficult to generalize about the cause or to propose remedies. Once we started using the tools, it became clear that the most common problem was students' inability to adequately appreciate the limitations and scope of a research contribution. When asked whether a research contribution was applicable in a scenario other than the one in which it had already been applied, students stumbled over both the question and the response. When the committee identified this common shortcoming, the faculty were asked to focus attention on it. As a result, student performance improved dramatically. Further, publishing the assessment form helped students better understand the goals of the examination and focus their preparation. It gave them a structure for evaluating research contributions (What is the problem? What are the success criteria? Is the solution correct? Are the evaluation criteria appropriate to the problem? Where are the boundaries of applicability?) and helped them organize their evaluations and discussions of contemporary research.

The assessment forms used for the research proposal presentation are common to all graduate engineering programs. Students and faculty alike have been using the forms to prepare for the proposal presentation; as a result, the proposals are better organized, state their research goals and evaluation criteria more precisely, and in general provide a clearer roadmap to guide the student's research journey.