Assessing Student Learning in General Education:

Practices Used at Texas Public Colleges and Universities

Based on the 2009-2010 Accountability Peer Group Meetings and the 2009 General Education Assessment Practices Survey

In recent years, national conversations about the value of higher education have led to renewed attention on the topic of general education and those educational outcomes that should be expected for any student who completes a college degree. Research with employers and educators suggests the need for additional emphasis in higher education on broad-based skills such as critical thinking, effective communication, and teamwork, in addition to content knowledge. In Texas, these conversations have become a focus for the Undergraduate Education Advisory Committee (UEAC), which has been tasked with revising the Texas Core Curriculum, the portion of general education held in common by all Texas public colleges and universities. As part of its work, UEAC is developing overarching objectives for the core curriculum that cut across disciplinary boundaries and reflect the skills all students will need for future success in higher education and careers.

The national conversation about essential educational outcomes has been paralleled by a discussion of the importance of identifying effective ways of assessing student learning. From an accountability perspective, federal and state agencies and higher education accreditors have demonstrated considerable interest in seeing evidence that students are attaining the learning outcomes expected of them in higher education and that colleges and universities are using assessment data as the basis for continuous improvement. Texas public colleges and universities, which are accredited by the Southern Association of Colleges and Schools Commission on Colleges (SACS), are expected by SACS to demonstrate the extent to which their graduates have attained the general education competencies defined by the institution and are asked by the Coordinating Board to evaluate the extent to which core curriculum objectives are being achieved.

In light of these requirements and the expectation that UEAC’s work will produce a new set of educational objectives for the core curriculum, the Accountability Peer Groups spent their 2009-2010 meetings discussing how their members assess student learning in general education. At the first two meetings in Fall 2009, the groups discussed the general education outcomes designated at their institutions and how those outcomes connect to the basic intellectual competencies, perspectives, and exemplary educational outcomes established for the current Texas Core Curriculum. They also reviewed how each institution currently structures the assessment of general education outcomes and the advantages and challenges associated with different assessment processes and methods. At the third meeting of the Accountability Peer Groups, in Winter 2010, the discussion focused on the instruments and criteria used to assess two key general education outcomes: writing and critical thinking.

This report describes some of the practices for assessing student learning in general education identified by Texas public colleges and universities through the 2009-2010 Accountability Peer Group discussions as well as through several open-ended questions included on the 2009 General Education Assessment Practices Survey conducted by the Higher Education Policy Institute. The focus of the report is on practices that have the potential to be adapted by institutions as they respond to the changes in the core curriculum that are expected to result from UEAC’s work. The report begins with a discussion of several possible approaches to assessing general education currently in use at Texas public colleges and universities, including institutional portfolios, standardized exams, and course-embedded assessment. The final section of the report uses the discussions that took place at the third Accountability Peer Group meeting to look in more detail at how Texas institutions assess writing and critical thinking.

General Education Assessment Process

Assessing general education is a particular challenge for colleges and universities. Unlike the assessment of discipline-based programs, there is often no clear ownership of general education within an institution. Core and other general education courses are taught by a range of departments but are usually expected to contribute to a set of institutional general education outcomes that cut across disciplinary boundaries. In many cases, institutions find it easiest to focus their general education assessment efforts on the courses in which key knowledge, skills, and attitudes are taught, but that approach does not address the question of whether students can apply and expand on those knowledge, skills, and attitudes in other courses. On the other hand, assessing a sample of students across the institution, whether using a standardized exam or examples of student work, gives a broader picture of student attainment of general education outcomes, but the data such assessments generate may be harder to tie back to improving instruction in individual courses. The examples of general education assessment practices described here therefore include a mix of institutional and course-embedded assessment approaches.

One key concern raised by many institutions is how to effectively account for transfer students when conducting general education assessment. In this era of student swirl, both two- and four-year institutions note that many students do not complete the core curriculum on their campuses. This situation raises the question of where accountability for student learning in general education lies. Some institutions have addressed this issue by focusing their general education assessment process on students who have completed a certain number of hours or who have completed the majority of the core curriculum at that college or university. This option, however, adds an extra layer of complexity to an already complex process and leaves open the question of whether it is possible to assess the extent to which students who take general education courses at multiple institutions have attained the outcomes deemed essential by the state.

Institutional Portfolios

The institutional portfolio approach to assessing general education was developed by Jeffrey Seybert at Johnson County Community College in Kansas and is based on the principle that general education is the responsibility of the entire faculty and should be assessed using existing student work. In this approach, a centralized group such as an assessment committee identifies classes in which students could be expected to demonstrate their grasp of general education outcomes. Faculty members are asked to provide copies of student work from those classes, and the artifacts are then scored by interdisciplinary teams of faculty members using rubrics developed at the institution for each of the general education outcomes.[1] Variations on this method for assessing general education outcomes are being used by a number of Texas public colleges and universities.

Schools using the institutional portfolio method have taken some different approaches to selecting the sample of courses and students to be assessed. At Amarillo College, for example, courses for assessment are selected using a stratified random sample of courses from across the curriculum, not just those in the core, that have been identified through curriculum mapping as addressing a particular general education outcome. Faculty members for those courses are requested, but not required, to provide a set of student work together with the assignment to which students responded. Once faculty members have sent in the student work, the director of assessment removes any artifacts produced by students who have not completed at least 30 college-level hours at the institution and also removes anything that might identify the individual student, course, or instructor. Several other colleges, including Lee College, Del Mar College, and Texas State Technical College-Harlingen, have also adopted the 30-hour requirement for their student samples.

Tyler Junior College uses a similar approach but has elected to assess only in core courses. Because this choice made it difficult to find students who had completed 30 hours, the college set its cut-off point at 20 hours instead. South Plains College and Wharton County Junior College also collect their sample of student work only from courses that are part of the core curriculum but do not set a specific number of hours for the students whose work is to be assessed. Richland College, on the other hand, identified a set of courses taken most often by students when they are close to completing the core curriculum and took their samples from those courses.

An alternative approach to sampling is used at The University of Texas at Brownsville/Texas Southmost College. In this model, departments are responsible for collecting student artifacts from all students completing both associate’s and bachelor’s degrees, usually from senior seminars or capstone courses. Departmental faculty then score the student work using rubrics developed at the institutional level. A general education assessment committee also scores a random sample of the projects collected by the departments.

Some institutions have adopted a variant of the institutional portfolio method but do not use it for all general education outcomes. Texas A&M University has been conducting a writing assessment project coordinated by the Office of Institutional Assessment and The University Writing Center, in which sets of student papers are collected from upper-division courses and, as in the previous examples, are assessed by a cross-disciplinary group of faculty members using a common rubric. As a component of its Quality Enhancement Plan (QEP), Collin County Community College has brought faculty from various disciplines together to develop a common rubric to assess critical thinking across the curriculum. Several other colleges and universities are also using this assessment method for selected outcomes such as writing and critical thinking that cut across disciplinary boundaries but not for more specific general education outcomes such as mathematics.

The type of assignment used for institutional portfolio assessment also varies at different schools. At many colleges, the faculty members whose classes are selected for assessment are asked to send whatever assignment they wish to include in the assessment. Other institutions have taken a more directive approach and assisted faculty members in developing assignments that demonstrate the relevant outcome. Richland College, for instance, brought together reading/writing faculty with faculty from courses selected for the general education assessment to work on developing similar assignments to be administered near the end of the semester. Lamar State College-Port Arthur also brings together liberal arts faculty, primarily reading/writing instructors, to develop common assignments and rubrics for writing projects that are used to evaluate general education outcomes across the disciplines.

Institutions generally take one of two possible approaches to scoring the student work collected for institutional portfolios. In the first model, all of the members of a scoring team for a particular outcome read all of the artifacts for that outcome. This process can be quite time-consuming. The institutions that use it, though, value the opportunity for their committee members to see institutional strengths and weaknesses as related to that general education outcome. Amarillo College, which uses this approach, has teams made up of five to six faculty members from across the institution who serve for three years. Team members are not paid for this work, although the college is considering recognizing it in the promotion and tenure process. Because each team member is expected to read a sample of 100 student artifacts, the assessment process takes a full year.

The alternative—used by institutions such as Tyler Junior College, Del Mar College, and the University of Houston—is to have each artifact read by two or three readers, sometimes with the third reader brought in if the first two readers’ scores are too far apart. This process is most often done in a group session, with participating faculty members provided with a stipend, or at least a meal, to compensate for their time and with an initial training session to accustom participants to the use of the particular rubric. Several colleges and universities note that these scoring sessions promote faculty engagement in the assessment process and can lead to useful conversations among faculty members from different disciplines about how general education outcomes are taught in their fields.

Many of the colleges and universities using this method to assess general education outcomes form interdisciplinary teams of faculty to score the student work. However, some institutions, including Northlake College and Northwest Vista College, both of which are using this method to assess writing, have the student work scored primarily by English or reading/writing faculty who have special expertise in assessing writing.

Other colleges have moved further from the original institutional portfolio model and have faculty members assess their own students' work while still using a common rubric for each general education outcome. At South Texas College, for example, faculty members teaching capstone courses have been asked to develop an assignment focused on a specific general education outcome and then assess their students' performance on the assignment using a rubric developed by the general education assessment committee.

The institutional portfolio model and its variants have the advantage of allowing colleges and universities to consider the extent to which their students are demonstrating learning on general education outcomes across the curriculum. Because of the relatively small size of the samples used, however, it is not always possible to connect assessment findings to specific curricular concerns, although several institutions noted that their assessment results encouraged them to emphasize the need for further work on specific outcomes across the curriculum. The institutional portfolio process is also time-consuming and requires both centralized coordination and considerable commitment from the faculty members who score the student artifacts. Nonetheless, for many of the institutions that use this approach, the process has helped engage faculty members in assessment and build a sense of ownership for general education at the institutional level.

Standardized Testing

One alternative to the labor-intensive institutional portfolio approach is the use of standardized general education exams. In Texas, these exams are currently used or are being adopted by nearly all four-year institutions, in part because the requirements for the Voluntary System of Accountability mandate that participating universities use this sort of exam. On the other hand, only about a third of two-year institutions use standardized general education exams. Such exams provide a relatively straightforward way to collect comparable data about student learning on general education outcomes, but institutions participating in the Accountability Peer Groups raised a number of concerns about the cost of these exams, how the exams are administered, lack of student motivation to perform well on the exams, and the extent to which findings are useful for improving instruction.