Assessment Cohort

University of Nebraska-Lincoln

Description of Program

The UNL Assessment Cohort program consists of 18 hours of graduate-level courses and targets experienced practicing teachers. The program's goals are to increase the assessment literacy of classroom teachers, to improve classroom assessment practices, and to prepare teachers for leadership roles at either the building or district level. The need for teachers to be leaders in assessment is partially driven by the State of Nebraska's assessment program, which requires all districts to develop and report the results of local assessments of standards in four core academic areas. While the focus of the program is broader than classroom assessment, classroom assessment is certainly a central feature.

Instructional Objectives

The 18 hours consist of two six-hour courses offered in consecutive summers, with a six-hour practicum during the intervening school year. The practicum gives participating teachers an opportunity to put the assessment knowledge and skills they have acquired into practice in an environment that allows for feedback and dialogue. Past experience and research suggest that the internalization of assessment knowledge is facilitated when the learner has an opportunity to implement that knowledge in a meaningful and appropriate environment.

The first six-hour course covers basic assessment concepts applied in both classroom and large-scale settings. The resources used in this course include Student-Involved Classroom Assessment authored by Rick Stiggins, Practice with Student-Involved Classroom Assessment authored by Judy Arter and Kathleen Busick, articles from professional literature, and a wide variety of web-based materials. The goals for this course are as follows:

Participants will gain the knowledge and skills necessary to:

  • Develop assessments to fit into specific contexts (knowing why you are assessing)
  • Develop assessments that reflect the specific achievement targets students must master (knowing what you are assessing)
  • Select from among a variety of assessment methods to fit the context (knowing how you are assessing)
  • Sample student achievement efficiently and effectively (knowing how much you need to assess)
  • Control for relevant sources of bias (knowing how to control accuracy)
  • Rely on student involvement to motivate students (knowing how to involve students in the assessment process)

The concepts covered in this course are listed below.

  • How assessment fits into the big picture
  • Users and uses of assessment
  • The importance of achieving balance in assessment
  • The importance of student involvement in assessment
  • Attributes of quality assessment
  • Standardized testing, including technical qualities (reliability, validity, norms, standard setting)
  • Achievement targets and assessment methods, including specific information about each type of target (reasoning, skills, products, and dispositions) and each assessment method (selected response, essay, performance, and personal communication), as well as the match between targets and methods
  • Overview of Nebraska’s assessment plan
  • Grading and report cards
  • The use of portfolios to facilitate assessment
  • Communicating assessment results

The second six-hour course focuses on analyzing and interpreting assessment data and data-based decision-making. The resources used in this course include Student-Involved Classroom Assessment authored by Rick Stiggins, a PC-based statistical analysis software package (SPSS), and a variety of materials from both the professional literature and the web. The goals for this course are as follows:

Participants will gain the knowledge and skills necessary to:

  • Identify what data needs to be collected and analyzed to provide sufficient evidence of the quality of an assessment
  • Identify what data is needed in various contexts to support decision-making at a variety of levels, including the classroom, building, district, and state
  • Develop an assessment plan to generate the types of data needed
  • Use software to run appropriate statistical analyses of data for specific purposes
  • Accurately interpret the results that are generated from a variety of statistical analyses
  • Identify the decision-making implications of various types of data, including decisions about developing sound assessments, instructional planning and interventions, and program evaluation

The concepts covered in this course are listed below.

  • Organizing data
  • Basic SPSS programming
  • Basic descriptive statistics (central tendency, variability, correlation)
  • Item analyses (difficulty and discrimination indices; an illustrative sketch follows this list)
  • Making judgments about sampling (representation and adequacy of coverage)
  • Estimating reliability and gathering evidence of validity
  • Issues associated with assessment bias, including conducting basic judgmental and empirical analyses of bias
  • Basic inferential statistics (t-tests and analysis of variance)
  • Communicating results
  • Using results to inform decision-making
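
The item-level analyses named above (difficulty, discrimination, and reliability) can be made concrete with a small worked example. The sketch below is illustrative only and is not part of the course materials: it uses Python and NumPy rather than SPSS, and the response matrix and variable names are hypothetical, chosen simply to show the shape of the computations.

    # Illustrative sketch (not from the course): basic item analysis and
    # reliability estimation for a small matrix of scored item responses.
    import numpy as np

    # Hypothetical data: rows = students, columns = items, 1 = correct, 0 = incorrect
    scores = np.array([
        [1, 1, 0, 1, 1],
        [1, 0, 0, 1, 0],
        [0, 1, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 0, 1],
    ])

    n_students, n_items = scores.shape
    totals = scores.sum(axis=1)          # each student's total score

    # Item difficulty: proportion of students answering each item correctly
    difficulty = scores.mean(axis=0)

    # Item discrimination: correlation between each item and the total of the
    # remaining items (a corrected item-total correlation)
    discrimination = np.array([
        np.corrcoef(scores[:, j], totals - scores[:, j])[0, 1]
        for j in range(n_items)
    ])

    # Cronbach's alpha: an internal-consistency estimate of reliability
    item_vars = scores.var(axis=0, ddof=1)
    total_var = totals.var(ddof=1)
    alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

    print("Item difficulty:    ", np.round(difficulty, 2))
    print("Item discrimination:", np.round(discrimination, 2))
    print("Cronbach's alpha:   ", round(alpha, 2))

In the course itself, comparable analyses are run in SPSS; the sketch simply shows what difficulty, discrimination, and internal-consistency figures look like for a small set of scored responses.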

Instructional Interventions

The instructional materials used in each course are described above. In addition, teacher-participants are encouraged to identify and use additional resources tailored to their specific interests and needs. Instructional responsibility is shared, with teacher-participants having numerous opportunities to share their ideas with the rest of the class. Technology also plays a key role in instruction: teacher-participants are introduced to software, are required to create computer-based presentations and products, and have access to computers throughout the course. In addition, a course site was established using Blackboard that permits participants to receive updated course materials and communicate with peers and faculty.

Each teacher-participant is expected to maintain an electronic portfolio over the course of the entire program. One major component of each portfolio is the set of projects that participants complete during each six-hour course. These projects are tied to the topics presented in class and are hands-on applications of assessment concepts to authentic educational problems or challenges. A few examples of the projects are: (1) evaluating a commercially available standardized test and a classroom assessment, (2) redesigning a district report card, (3) writing a response to a newspaper article about assessment, (4) designing a parent pamphlet on assessment, (5) developing an assessment plan for a unit of instruction, and (6) analyzing the reliability of a district-level assessment. In general, the projects are designed to allow participants to tackle the assessment challenges they are currently facing in their professional lives. Teacher-participants have the opportunity to share each of their projects with the larger group, resulting in an exchange of knowledge and ideas between the presenter and the rest of the group and allowing all members to enhance their learning. In addition to the projects, the portfolios include a variety of self-assessments, self-reflections, summaries of resource materials, communications with other participants, reactions to materials, and reflections on discussions that occurred during class.

Individual teacher-participants determine all of the projects associated with the practicum experience. Projects may focus on classroom assessment applications or district assessment applications, depending on each participant's current needs. Each teacher-participant is required to keep a record documenting each project, including a description of the project and its goals, a plan for how the project will be conducted, the collection and analysis of appropriate data, and conclusions. Many of the projects result in a specific product, such as a revised classroom assessment system or a district-level assessment.

Evidence of Effectiveness

The assessment of teacher-participant learning in this course is heavily student-involved. In addition to portfolio evidence, self-reflective instruments were used to determine the effectiveness of instruction. These instruments were administered online, and the data were password-protected.

Three instruments were adapted by Dr. Delwyn L. Harnisch for electronic use: the Self-Assessment Development Levels, the Assessment Competencies Knowledge Rating Scale, and the Classroom Assessment Confidence Questionnaire.

The Self-Assessment Development Levels was used as a pre/post instrument to measure students' perceived competency in key areas related to the course objectives: identifying and establishing clear and appropriate learning targets; understanding the users and uses of assessment; matching assessment methods to targets; sampling; and eliminating potential sources of bias and distortion. The instrument asked students to rate their knowledge from "Beginner" to "Skilled."

The data showed that the teacher-participants felt they had developed in each of the competencies. In the area of identifying and establishing clear and appropriate learning targets, over 53% of the students rated themselves as "skilled" at the conclusion of the first course, compared to 8.7% at the beginning. There was also a dramatic increase in understanding the users and uses of assessment, with 60% rating themselves as "skilled" compared to 0% at the beginning of the course. In matching assessment methods to targets, students rated themselves as either "practiced" or "skilled," meaning they were trying many types of assessments but might still need some fine-tuning; this was the area that showed the least change over the duration of the course. Finally, in the areas of sampling and eliminating bias and distortion, most students rated themselves as "practiced" or "skilled."

The second instrument was the Assessment Competencies Knowledge Rating Scale. The graph below summarizes the students' mean ratings in each of the areas.

[Graph: Assessment Knowledge Rating Scale, mean ratings in each area]

Overall, the students seemed to feel that they could converse fairly well in general terms in each of the areas and could give "expert advice" in designing and developing classroom assessments and in the area of conferencing.

Finally, the Classroom Assessment Confidence Questionnaire, an open-ended questionnaire asking students to reflect on their experience, was given at the conclusion of the first six-hour course. Students identified five skills they felt they had learned: separating skills from learning behaviors; creating classroom performance situations based on learning objectives; focusing assessment methods; interpreting results to students and parents; and matching curriculum targets to assessments. The teachers also set goals for themselves, including involving learners more in the assessment process and becoming leaders in curriculum and assessment.