Assessment Reports: Feedback, Quality, and Closing the Loop
Academic Assessment Council Workshop, 3/25/2016
Some Uses of the Annual Assessment Report*
- Provide critical feedback and reflection on current levels of student learning in relation to Program Learning Goals and associated Student Learning Outcomes.
- Provide ongoing documentation of assessment efforts. These records will be useful to department faculty, department chairs, and assessment coordinators to ensure continuity.
- Identify achievements and areas for improvement in Student Learning Outcomes by highlighting where students are achieving learning outcomes, why changes have been made, and what was achieved by those changes.
- Reports present well-planned assessment work, explain the process and choices, analyze data, and put forth interpretations that guide future action, i.e., closing the loop.
- Closing the loop can take the form of affirmation that learning benchmarks were met and/or probing into learning gaps with the intent to improve student learning levels.
- How a program addresses any learning gap requires attention, discussion, and decision making across stakeholders; it is a process in and of itself.
What Can We Learn from Assessment Reports?
- Has good quality assessment work been completed and/or where are areas for improvement?
- Can assessment work better capture what students can do or know at or near graduation?
- Where are opportunities for closing the loop and have they been addressed?
Report Evaluation Exercise
In the following exercise, we will evaluate 3 (brief) Annual Program Assessment Reports, using the standards noted above, to achieve the following goals:
- Identify and discuss elements of good quality assessment practice and areas for improvement
See Report Evaluation Criteria – separate slide
- Assess steps toward, and/or identify opportunities for, closing the loop
Closing the Loop Exercise
Following the exercise, we will look more closely at Chico examples of closing the loop in order to:
- Build understanding of closing the loop
- Identify opportunities to make improvements in assessment and/or student learning
- Consider what opportunities for closing the loop exist in participants' own programs
* CSU, Chico has two formats for annual assessment reporting: the Annual Program Assessment Report (APAR) and the Annual Program Assessment Status Update (APASU). Both reports are completed by September 30 for the prior academic year, for both undergraduate and graduate-level degree programs. The report templates can be found on the Academic Assessment Council site.
Program Alpha
We examined Outcome 3, Writing in the Discipline, using 10 papers written in our Capstone course. Several instructors who teach this course volunteered to apply the AAC&U VALUE Writing rubric to some of their students' papers while grading, and they submitted the scores to the Assessment Coordinator, who combined the results. (The rubric has five dimensions that were rated 1 to 4, so the lowest possible score was 5 and the highest possible score was 20.) We also collected these students' self-ratings of their writing skills. After summarizing the results, the Assessment Coordinator was satisfied that our students write well. Students agreed.
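For illustration, a minimal sketch of one way the combined rubric scores could be tabulated into the score bands reported in Table 1, assuming each paper's five dimension ratings are recorded as a simple list (the data below are made up, not the program's actual scores):

```python
from collections import Counter

# Each inner list holds one paper's ratings (1-4) on the five VALUE Writing
# rubric dimensions, as submitted by the course instructors (illustrative data).
papers = [
    [4, 4, 3, 4, 4],   # total 19
    [3, 3, 4, 3, 3],   # total 16
    [2, 3, 2, 3, 2],   # total 12
    [1, 2, 2, 2, 2],   # total 9
]

def score_band(total):
    """Map a paper's total score (5-20) to the bands used in Table 1."""
    if total >= 18:
        return "18-20"
    if total >= 15:
        return "15-17"
    if total >= 10:
        return "10-14"
    return "5-9"

# Sum each paper's dimension ratings and count how many papers fall in each band.
band_counts = Counter(score_band(sum(ratings)) for ratings in papers)

for band in ["18-20", "15-17", "10-14", "5-9"]:
    pct = 100 * band_counts[band] / len(papers)
    print(f"{band}: {pct:.0f}%")
```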
Table 1 is a summary of our findings based on the Writing Rubric.
Table 1
Score / Percentage
18-20 / 10
15-17 / 50
10-14 / 30
5-9 / 10
Table 2 is a summary of the students’ self-ratings.
Table 2
Self-Rating / Percentage
I have serious problems communicating in writing. / 0
I need to improve my writing to communicate well. / 0
I write fairly well. / 20
I am an excellent writer. / 80
EVALUATION / Praise (+) / Constructive Criticism (-)
Overall Report - Is the report clearly written and reasonably complete? Based on the report, can you understand and evaluate what was done? Are important details missing?
Evidence- Did they collect reasonable evidence in reasonable ways? Was the sample representative and reasonably sized?
Assessment of the Evidence - Did they do it well? Did they apply a reasonable rubric or scoring system? Were readers calibrated? Were assessments reliable?
Use of the Evidence - Did they use a reasonable decision process and reach reasonable conclusions about student mastery of the outcome and how to close the loop, if needed? Have they followed through on previously reported plans to close the loop?
Program Omega
This year we assessed this outcome: Students who complete our program can explain concepts and theories in our discipline. Students in four upper-division courses completed an embedded final exam question. While each course required students to examine different terms, all the embedded questions followed this format:
Define four of the following five terms:
Responses from 100 students were randomly selected, and a team of six faculty assessed the evidence using this rubric:
Unacceptable / Needs Improvement / Acceptable / Exemplary
All four of the definitions were inaccurate or incomplete. / Three of the definitions were inaccurate or incomplete. / Two of the definitions were inaccurate or incomplete. / All of the definitions were accurate and complete.
Here is a summary of our findings:
Score / Percentage
Exemplary / 21%
Acceptable / 46%
Needs Improvement / 19%
Unacceptable / 14%
We discussed results at the November 19 department meeting, and faculty concluded that too many of our students cannot adequately define terms in our discipline. We agreed that faculty in all of our courses would devote class time to helping students practice defining terms. This spring all faculty reported doing so in their courses, so we successfully closed the loop.
EVALUATION / Praise (+) / Constructive Criticism (-)
Overall Report - Is the report clearly written and reasonably complete? Based on the report, can you understand and evaluate what was done? Are important details missing?
Evidence - Did they collect reasonable evidence in reasonable ways? Was the sample representative and reasonably sized?
Assessment of the Evidence - Did they do it well? Did they apply a reasonable rubric or scoring system? Were readers calibrated? Were assessments reliable?
Use of the Evidence - Did they use a reasonable decision process and reach reasonable conclusions about student mastery of the outcome and how to close the loop, if needed? Have they followed through on previously reported plans to close the loop?
Program Delphi
This year we assessed Outcome 2, Students can think critically about issues in our discipline. We collected evidence in our Capstone course last spring by requiring students to write a paper in which they explore an important issue in our discipline. Students chose their own topics, but instructor approval was required. Students were given the AAC&U VALUE critical thinking rubric in advance and were told that part of their grade would be based on the quality of their critical thinking, as defined by the rubric. We collected essays (n=157) in all sections of the capstone course, and we randomly selected 50 of them for assessment.
Eight faculty volunteers assessed the essays using the rubric, with two faculty independently assessing each artifact. We first calibrated, and inter-rater reliability for each scale was at least .80 (range: .80 to .91). At the end of the scoring session the involved faculty agreed that the rubric appeared to reasonably assess critical thinking.
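A minimal sketch of how inter-rater reliability for one rubric dimension might be checked; the report does not name the statistic used, so a Pearson correlation between the two raters' paired scores is assumed here (the scores below are illustrative):

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Illustrative paired scores (1-4) that two independent raters gave the same
# essays on one dimension of the AAC&U critical thinking rubric.
rater_a = [3, 2, 4, 3, 3, 2, 4, 3, 1, 3]
rater_b = [3, 2, 4, 4, 3, 2, 3, 3, 2, 3]

r = correlation(rater_a, rater_b)
print(f"Inter-rater reliability for this dimension (Pearson r): {r:.2f}")
```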
Results were summarized (see table below), and the eight faculty who scored the artifacts reached consensus that students performed at acceptable levels for Explanation of Issues, Influence of Context and Assumptions, and Student's Position, but did not meet their expectations for Evidence and Conclusions. They recommended that the faculty seek the help of the campus faculty development director for advice about how to improve students' use of evidence and ability to reach conclusions.
Dimensions / Level 1 / Level 2 / Level 3 / Level 4
Explanation of issues / 0% / 14% / 70% / 16%
Evidence / 5% / 25% / 68% / 2%
Influence of context and assumptions / 0% / 4% / 80% / 16%
Student’s position (perspective, thesis/hypothesis) / 0% / 8% / 60% / 32%
Conclusions and related outcomes (implications and consequences) / 26% / 38% / 34% / 2%
The faculty development director suggested several possible pedagogical changes, and the faculty decided to add problem-based learning to the four courses that share responsibility for developing students' critical thinking skills. With the director's assistance, the six faculty who teach those courses met several times in November, and this spring they are pilot testing a problem-based learning assignment in each course; they have also agreed to integrate the AAC&U critical thinking rubric into the grading of these assignments. They plan to meet again at the end of the semester to discuss what they learned about using this pedagogy and the impact it had on students' critical thinking.
EVALUATION / Praise (+) / Constructive Criticism (-)
Overall Report - Is the report clearly written and reasonably complete? Based on the report, can you understand and evaluate what was done? Are important details missing?
Evidence - Did they collect reasonable evidence in reasonable ways? Was the sample representative and reasonably sized?
Assessment of the Evidence - Did they do it well? Did they apply a reasonable rubric or scoring system? Were readers calibrated? Were assessments reliable?
Use of the Evidence - Did they use a reasonable decision process and reach reasonable conclusions about student mastery of the outcome and how to close the loop, if needed? Have they followed through on previously reported plans to close the loop?