Annual Initial Licensure Program Assessment Report

Elementary Education, Undergraduate and MEd Programs

June 15, 2012

Date of Meeting: June 6 and April 26 (also throughout May via email). The Program Committee faculty listed below have provided expertise for the primary content areas; all have been directly involved in data gathering and/or data analyses for one or more of the CAEP assessments.

Participants/Role:

Paul Cobb, Program Committee Member—Mathematics

Amanda Goodwin, Program Committee Member—Literacy

Clifford Hofwolt, Program Committee Member—Science

Deborah Rowe, Program Committee Member—Literacy

Emily Shahan, Program Committee Member—Mathematics

Lanette Waddell, Program Committee Member—Mathematics

Kathy Ganske, Program Director

Program Progression

1.  2011-2012 Undergraduate Screening I and Formal Admission to the Program: Students were formally admitted to the Undergraduate Elementary Education Program during both fall 2011 and spring 2012 semesters. During 2011-2012, 36 students (fall n = 19; spring n = 17) applied for formal admission through the Screening I process; 35 of these were admitted (fall n = 18; spring n = 17). One fall student, who demonstrated numerous documented dispositional issues, changed majors and did not complete the Screening I process.

Students are expected to apply to the Undergraduate Elementary Education Program, through the Screening I process, no later than fall of junior year. This year students were encouraged to apply as sophomores (spring) in order to provide the Program with an earlier in-depth review of their performance and dispositions. During fall 2011, a revised process for Screening I was put in place. In addition to students’ formal applications, faculty feedback regarding academic performance and dispositions, and Program Committee consideration of the student’s potential to be successful in the Program, applicants were also required to participate in a panel review process. The panel reviews replaced the previously used one-to-one interviews with the student’s advisor or Program Director. For each of the four 10-minute panel reviews, students were asked to respond to a scenario or question presented to them in one of the following four categories: Professionalism and the Profession; Collaboration with Colleagues and Families; Diversity; and Development of Critical Thinking. Responses in each area were scored by a team of evaluators composed of a faculty member and a school teacher or instructional coach. Each rubric (4-point scale) included an element related to the topic and an element related to Communication. Average scores on the rubric areas evaluated ranged from 2.3 (Diversity) to 2.8 (Collaboration with Colleagues and Development of Critical Thinking). Individual student averages ranged from 2.2 to 3.1, with an overall average of 2.6.

A similar process was followed during spring 2012, with the exception that the number of mini-panels was reduced to three by consolidating Professionalism and the Profession with Collaboration with Colleagues and Families. In addition, effort was made to ensure that evaluators represented expertise across the four content areas: literacy, math, science, and social studies. Average scores on the rubric areas, including Communication, ranged from 2.78 (Professionalism and Collaboration) to 3.04 (Critical Thinking and Diversity). Individual student averages in the spring ranged from 2.29 to 3.67, with an overall average of 2.96. Considering that students complete the Screening I process early in the sequence of program courses and field experiences, ratings of 2 (emergent) are expected. In general, students exceeded expectations.

2.  Candidates Admitted to the MEd in Elementary Education, Plus Licensure Program during 2011-2012: During spring 2011, 27 applicants to the MEd Program were admitted; 17 subsequently accepted and formed the 2011-2012 cohort. During spring 2012, 17 of the 33 admitted applicants accepted; this group will comprise the new cohort for 2012-2013.

3.  Undergraduate Elementary Education Screening II for 2011-2012: Screening II is a review process that teacher candidates undergo to ensure that they are ready to student teach the following semester. Twenty-four undergraduate candidates applied for Screening II during 2011-2012 (n = 15 fall; n = 9 spring). Of the 24 candidates, 21 were fully approved and either student taught during spring 2012 or will student teach in fall 2012. One candidate in the fall, who demonstrated a pattern of numerous dispositional issues, changed majors and did not complete the Screening II process. One spring candidate chose not to complete the process at this time; the other candidate experienced difficulties in a spring course and intends to retake the course in the fall and then student teach in the spring. Her file will be reviewed again in the fall to ensure improved performance in the course being retaken.

4.  MEd Elementary Education, Plus Licensure Screening II for 2011-2012: Seventeen MEd candidates applied for Screening II; all 17 were approved to student teach during fall 2012.

5.  Undergraduate and MEd Candidates Successfully Completing Student Teaching: Thirty-five candidates successfully completed student teaching during 2011-2012. Twenty of these candidates were in the Undergraduate Elementary Education Program (n = 6 fall and n = 14 spring); the other 15 (fall) candidates were in the Masters in Elementary Education, Plus Licensure Program. Three of the undergraduate students were Dual Majors (1 in the fall and 2 in the spring). These students completed just one placement of student teaching in Elementary Education. The other placement was in a Special Education setting.

Candidate Performance on Key Assessments

1.  What Do the Data from Key Assessments and Dispositions Indicate about Candidates’ Ability to Meet Standards? Because both the Undergraduate and MEd Programs are initial licensure programs, essentially the same assessments are defined for CAEP; in addition to CAEP requirements, they reflect the requirements of the Association for Childhood Education International—ACEI to address all content areas, especially the primary content areas of Literacy, Mathematics, Science, and Social Studies. In addition to dispositional data, the following assessments are used to assess candidates: four Praxis II exams, the Core Content Assessment—CCA, a Planning Portfolio, the Teacher Performance Assessment—TPA, the Final Student Teaching Assessment—PGP and the Final Student Teaching Supplement, and a Unit Plan. Data are also collected on field experience performance. Results from these assessments reveal that candidates are developing solid to strong knowledge, skills, and dispositions for teaching in the elementary grades (K-6).

Assessment 1 (Licensure Assessment): Four Praxis II Exams: Curriculum, Instruction, and Assessment; Reading Across the Curriculum; Content Knowledge; and Principles of Learning and Teaching (completed prior to being recommended for licensure; typically, these tests are completed following the final semester of academic coursework, during student teaching, or immediately after student teaching). During 2011-2012, 42 candidates completed all or portions of the four Praxis exams. As the results in Table 1 reveal, average scores for each of the four tests, for both undergraduate and MEd candidates, exceed the cut score required by the State of Tennessee. Of the four tests, Curriculum, Instruction, and Assessment (CIA) is the only one completed exclusively by Elementary Education teacher candidates. Six categories of understanding are evaluated by the CIA: Literacy; Mathematics; Science; Social Studies; Arts and PE; and General Information about Curriculum, Instruction, and Assessment. The Praxis II Report for AY 2010-2011, distributed in October 2011, reveals strong performance across the six tested categories of the CIA but does not sufficiently disaggregate the data by level to permit confident interpretation of the six subcategories at the program level. However, individual students’ overall scores, which ranged from 176 to 196 on the paper test and 169 to 199 on the computer version of the test, clearly demonstrate that each teacher candidate’s performance considerably surpassed the required State score.

Assessment 2 (Content Knowledge in Elementary Education): Core Content Assessment—CCA (completed prior to the candidate being recommended for student teaching: fall or spring for undergraduates and spring for MEd candidates). The CCA is an evidence-based, diagnostic analysis by the teacher candidate that represents a repertoire of knowledge in the four key content areas: Reading/Language Arts, Mathematics, Science, and Social Studies. It is designed to allow candidates to demonstrate their understanding of both specific content knowledge and the application of the knowledge to their teaching. It is an authentic assessment task through which teacher candidates demonstrate their understanding of specific content knowledge by analyzing elementary student work products. Candidates present analyses of the work samples provided (including interpretations of student thinking), verification for their interpretations, and implications for their teaching, based on their interpretations. Students complete the Core Content Assessment in a single administration and as a part of the Screening II process. New rubrics were developed in early fall 2011 for each of the CCA content areas; the rubrics align with the ACEI standards. The number of ACEI standard elements assessed by the rubrics varies by content area from one (Mathematics) to four (Literacy).

During 2011-2012, all but two undergraduate teacher candidates, and all MEd candidates, passed all four content areas of the CCA. Because the CCA is completed towards the end of the program, it is expected that candidates will achieve levels beyond emergent. As Table 2 reveals, the percentage of rubric items passed at the proficient or accomplished level by MEd candidates ranged from 70% (Science) to 100% (Social Studies). Percentages at the Undergraduate level were more varied, with Literacy, Science, and Social Studies percentages similarly robust (87.5% to 93%) and Mathematics considerably lower (42% and 50% for fall and spring, respectively). Two candidates did not pass the Math portion of the CCA, one in the fall and one in the spring. Faculty within each content area determine whether students demonstrate adequate understanding to “pass” their portion of the CCA, which accounts for the higher passing rate for Math despite the number of emergent ratings at the undergraduate level.

Faculty analysis of student performance in each of the content areas led to the following conclusions:

·  Literacy: Solid to strong performance overall; emergent ratings generally stemmed from misinterpretation of a particular type of assessment data (UG and MEd) or from difficulty applying knowledge to an instructional situation.

·  Mathematics: Although the rubric contains just one element, performance was analyzed on a deeper level to understand candidates’ knowledge of whole numbers, rational numbers, and negative numbers. Several candidates (UG) demonstrated difficulty with rational numbers and proportional reasoning (the word problems that some candidates generated for rational numbers were mathematically appropriate but would not have been appropriate for use with children; wording of the question may have been a factor). MEd candidates’ weakest performance was also on the proportional reasoning task.

·  Science: Candidates demonstrated understanding of the concepts children need to know to perform a specific novel task, but they had difficulty knowing how to address students’ perceptions and, in some cases, revealed misconceptions of their own (UG and MEd).

·  Social Studies: Although undergraduate candidates were able to draw on Social Studies knowledge in their responses, in general they did not mention provision of inquiry-based, real-life research opportunities or the use of content integration. MEd candidates demonstrated solid to strong understandings.

Assessment 3 (Ability to Plan Instruction): Planning Portfolio—Literacy, Mathematics, Science, Social Studies (completed as part of the corresponding methods course[s], across the Programs, prior to student teaching). Candidates develop several lessons for each content area as part of their methods courses, using a detailed lesson-plan template that is consistent across program courses; one of these lessons in each content area is uploaded to TaskStream and thus becomes the basis for evaluating the candidate’s ability to plan lessons across the primary content areas. Whenever possible, candidates teach these lessons and reflect on their teaching and students’ learning in conjunction with an accompanying practicum experience. For 2011-2012, data from all four content areas were analyzed at the undergraduate and MEd levels. Lessons are evaluated on areas specific to each of the content areas, as well as five common areas (Development, Learning and Motivation; Adaptation to Diverse Learners; Critical Thinking and Problem Solving; Active Engagement in Learning; and Assessment).

Modal ratings for content-specific standards for Literacy, Mathematics, Science, and Social Studies provide an overall picture of performance in the content areas at the undergraduate and MEd levels. Modal scores at the undergraduate level were as follows: Literacy (four areas assessed)—75% of the areas were rated proficient and 25% not applicable; Mathematics (three areas assessed)—67% proficient and 33% emergent; Science (two areas assessed)—100% proficient; and Social Studies (two areas assessed)—emergent. Undergraduate Social Studies performance ratings in the spring were much stronger; they were based on course performance expectations rather than end-of-program expectations. For the MEd Program, modal scores by content area were Literacy (four areas assessed)—50% proficient, 25% accomplished, and 25% not applicable; Mathematics (three areas assessed)—100% proficient; Science (two areas assessed)—50% proficient and 50% accomplished; and Social Studies (two areas assessed)—100% proficient. The not applicable ratings for Literacy are discussed in the section on Changes Needed.

Data for the five non-content-specific ACEI standards evaluated across Literacy, Mathematics, Science, and Social Studies are shown in Tables 3-4. At the undergraduate level, candidates begin to learn about lesson planning as sophomores in the Social Studies methods and practicum courses. They typically complete the other methods courses and planning assessments during their junior year or the first semester of their senior year. Although there is variation across content areas, as demonstrated by the percentages and modal scores, a progression of improved performance is evident at the undergraduate level from Social Studies planning to planning in the other content areas. In the MEd Program, planning as represented by this assessment occurs across a single academic year. MEd candidates’ overall proficient performance demonstrates their ability to plan effective lessons in specific content areas.

Assessment 4 (Student Teaching): Final Student Teaching Assessment—PGP and Final Student Teaching Supplement (completed at the end of the second student teaching experience).

The Final Student Teaching Assessment—PGP (Professional Growth Profile) is based on a developmental model characterized by ongoing and continuous evaluative feedback. Throughout the program, the same criteria are used to measure growth and development in a formative as well as summative manner. The PGP performance evaluation covers four criteria: subject matter knowledge; knowledge of learners and learning; conceptions of the practice; and initial repertoire in curriculum, instruction, management, and assessment. These criteria derive directly from the Peabody Teacher Education Conceptual Framework, which articulates criteria for excellence in candidate performance. The final PGP scores are a compilation of PGP student teaching evaluation results from the first and second placements that were completed by the University Mentor and Field Mentor. The Final Student Teaching Supplement is completed at the end of the second placement, a placement during which students typically teach all four content areas. The Supplement was first used during fall 2011; it was added to the PGP to permit further data gathering on nine areas of performance important for Elementary Education teaching: communication, diversity, classroom management, reflection, home/school interactions, Literacy teaching, Mathematics teaching, Science teaching, and Social Studies teaching. As with the PGP, reported scores for the Supplement are a compilation of the evaluations of the Mentor Teacher and University Mentor, but for just one placement.