NDTL: “Exploring Faculty Development through Best Practices and Innovative Ideas”

Chapter 5

“Innovative Ways of Assessing Faculty Development”

By L. Dee Fink

During the past few decades, faculty developers in the US and internationally have steadily become more aware of the need to do a better job of assessing our programs. We are part of institutions of higher education that are also growing more aware that they not only need to “support” learning by professors and students but also need to “be” learning organizations. This was presaged by Donald Schön back in 1973, when he wrote the following about the needed characteristics of colleges and universities in a changing society:

“we must become able not only to transform our institutions in response to changing situations and requirements, we must invent and develop institutions which are learning systems, that is to say, systems capable of bringing about their own continuing transformation.” (quoted in Stefani, 2011, p. 222)

If one of the central purposes of these educational institutions is to provide high quality teaching and learning, then faculty development programs play a strategically central role in this process of continuous “self-transformation”. And that means we need to understand how well we are supporting this goal of self-transformation and how well we are transforming ourselves.

The good news is that, as the list of references for this chapter testifies, there is now a sizeable body of literature that contains descriptions of good practice, ideas about what we should do, and periodic reviews of this literature.

The not-so-good news is that, when survey studies have been done on what programs actually do for assessment (e.g., Levinson-Rose and Menges, 1981; Chism and Szabó, 1997; Hines, 2009; and Kucsera and Svinicki, 2010), the results generally indicate that most of us make an effort to measure participant reaction or satisfaction at the end of particular events but few of us manage to go much beyond that. The big challenge, of course, is how to go beyond attendance and satisfaction to get at the deeper questions: How much impact do we have on professors’ teaching practices? And, even more challenging, do those changes improve student engagement and student learning?

There are some fairly obvious reasons why we don’t do more, even though we may recognize the value of doing so. First, we may simply lack the “know-how”. Second, we may lack the time and personnel resources. Or, third, we may believe that it isn’t really possible to assess whether our programs lead to changes in professors’ teaching and hence to better student learning; the results would be complicated by too many other factors. (Adapted from Grabove et al., 2012, p. 10)

The viewpoint taken in this chapter is that we can and should go beyond the obvious need for knowing about participant satisfaction, in order to achieve a deeper form of program assessment. To help us learn how to do this, I will share several examples from programs where the directors have done a more thorough job, and try to answer questions we all face: How did they overcome the barriers to better program assessment? And, specifically, what did they do?

Some of this information comes from publications; some comes from personal communication from multiple people who were kind enough to respond to my request for examples when I was preparing to write this chapter.

Theoretical Perspective on Assessment

Reading both the published and unpublished accounts of what people did for program assessment has led me to take the following view in this chapter: Assessment consists of a series of decisions that the person(s) doing the assessment must make. For example:

·  WHY are we assessing?

o  Who is the intended audience for the assessment? And what do they want to know, i.e., what is their primary question?

·  What is the SCOPE of Our Assessment?

o  The whole program? A signature activity? One particular event?

·  WHAT INFORMATION do we need to collect?

o  Should we be collecting information about participant learning, changes in their practice of teaching, student learning, etc.?

·  HOW can we collect the desired information?

·  How should we USE or PRESENT that information?

This chapter will share ideas, information, and examples for each of these central questions, and then wrap up with some suggestions on the general characteristics of good assessment procedures.

WHY Are We Assessing?

The first question we need to answer is: Why do we want to assess our program? That is, what is the programmatic need that justifies the expenditure of time, effort, and perhaps money to gather assessment information? Thinking carefully about the purpose of our assessment in fact leads to two important sub-questions: Who is the intended audience for the assessment information, and what is their primary question?

There are at least four potential audiences:

  1. Ourselves: As the people leading or helping implement a program, we want to know things like: Are we offering the right set of activities within our program? Given the relative costs associated with each activity, how effective is each one in terms of having an impact on faculty teaching and student learning?
  2. Prospective Faculty Participants: Do we want to convince more professors that it would be worth their while to participate in our program activities? If so, then we probably need to focus on whether currently participating faculty have acquired ideas that have had a discernible impact on such things as (a) raising student evaluations, (b) increasing student engagement, (c) increasing the quantity and quality of student learning, (d) enabling them to teach effectively with less time, e.g., by using good rubrics to more efficiently assess student work.
  3. Institutional Administrators: Administrators not only have to decide the level of funding for our program but also how strongly to encourage their faculty to participate. Hence their questions relate both to the impact of participation on the quality of teaching and learning and to the relation of all this to cost, i.e., what some would call “ROI”: “Return-on-Investment.”
  4. External Funding Agencies: When we apply for off-campus funding, these agencies find it helpful to know whether a particular program has a good “track record”. This usually includes information about our ability to attract a high level of faculty participation as well as the ability to offer effective programs, i.e., events that enable faculty to change and improve their teaching and student learning.

Each of the four audiences described above is interested in the “impact” question, i.e., whether participation led to changes in teaching that in turn led to improved student engagement and better student learning. But each audience also has distinct concerns that have implications for what data we need to collect.

What is the SCOPE of Our Assessment?

Knowing for whom we are assessing, and knowing the central question(s) of that audience, will clarify what we want to assess: the whole program; a signature activity, i.e., one that plays an especially important role in our program; or one or more “regular” activities, e.g., consulting services, workshops, etc.

If our central purpose is to answer, for ourselves or for campus administrators, the question of whether our program is achieving a high “return on investment”, we presumably want to collect information about the whole set of program activities, regardless of which activity any particular person participated in. Kelley’s description of assessment at the University of South Dakota in their 2010-2011 Annual Report takes all of their activities into account, as does Roberson’s 2012 ITLAL Survey at the University of Albany.

On the other hand, if we have a particular activity that plays an especially important role within our whole program and we want to know whether it is working well or needs to be reviewed and revised, we may want to focus specifically on it. A number of the published assessment reports are focused on such signature activities:

·  a 4-day retreat at Stellenbosch University in South Africa (Cilliers and Herman, 2010)

·  a year-long program consisting of multiple activities aimed at promoting critical reflection and inquiry about teaching, primarily among junior faculty at Northwestern University (Light et al., 2009).

·  A Faculty Fellowship Program, a multi-faceted program aimed at promoting the use of technology-enhanced teaching across campus at the University of Minnesota (Brooks et al., 2011).

·  a SoTL grant initiative at Illinois State, e.g., asking whether it affected recipients’ own teaching or their interactions with faculty across campus (McKinney and Jarvis, 2009).

One other project that has garnered a lot of attention involved an assessment of extended training programs for junior faculty at 20 institutions in 8 countries (Gibbs and Coffey, 2004).

The third option is to assess multiple activities simultaneously and individually. When Hines (2009) collected information about the assessment practices of 20 different faculty development programs in the state of Minnesota, the framework involved collecting data about each activity within a given program, e.g., individual consulting, workshops, etc.

WHAT INFORMATION Do We Need to Collect?

What is it we need to collect information about? This question leads into a re-consideration of what faculty development programs are and what they are trying to accomplish.

Initially, my own conceptualization of faculty development programs was somewhat linear and not very complicated, essentially following the ideas of Kirkpatrick (1994) as slightly modified by Guskey (2000). My initial view was: our programs provide activities that we hope will (a) attract good participation and (b) generate participant satisfaction, i.e., a sense that they learned something, which will then (c) lead to better teaching and eventually to (d) better student learning. In my view, this is still the heart of what we are trying to do.

But after reading the accounts of what other program directors are trying to assess, which is a reflection of what they think they are trying to do, I found that I needed to enlarge my initial conceptual perspective.

Figure 5.1 lays out a new conceptual scheme for the potential scope of faculty development activities. Not all programs are trying to do all of this; that is why it is called the “potential” scope. But each of the elements in this diagram was included in the assessment effort of some program described in the literature I encountered. Let me comment on the components of this diagram.

(***Place Figure 5.1 near here***)

Figure 5.1 The Potential Scope and Purpose of Faculty Development Programs

Faculty Development Activities: Interacting with teachers (full-time professors, adjuncts, graduate teaching assistants) is the primary function of essentially all programs; arrow “A” refers to this interaction. Many programs, though, have begun to realize that there will be limits on the impact of their program unless they can encourage changes in campus policies, campus culture, etc. Hence they also work on “organizational development” by interacting with campus leaders (arrow “C”), meaning administrators but also faculty who are committee chairs or simply opinion leaders on campus.

Teachers: When we offer program events for campus teachers, one element of obvious interest is how well they attend and whether they are “satisfied” that they learned something. Sometimes we assess this latter factor informally; sometimes we distribute questionnaires aimed at obtaining participant reaction to the event.

But we also want their participation in the event to lead to something more. This is one of the major challenges to better assessment of our programs: How do we do a better job of determining the post-event impact of our activities? In the US, we often speak of wanting to get professors to use certain teaching practices, e.g., active learning. In Europe and British Commonwealth countries, faculty development writers also aspire to changing teachers’ conceptions of and approaches to teaching, which they see as a necessary precondition for sustainable changes in teaching behavior. In my program at Oklahoma, I found that professors sometimes developed a more positive attitude toward teaching. When they were able to talk honestly and openly with others about problems and discover that others also had problems, participants became less discouraged about their own teaching and more hopeful that better teaching was actually possible.

Students: My view of the impact we might want to see on students has also been enlarged. We want better learning, of course. But even as some think we need to attend to professors’ conceptualizations of and approaches to teaching, some believe that we also need to think about how teachers might have an impact (arrow “B”) on students’ approaches to studying and learning, and about how we can improve student engagement, i.e., the energy students put into doing the work of learning.

Institutional Leaders: There are a number of ways in which administrators and faculty leaders affect the actions of both teachers (arrow “D”) and students (arrow “E”).

An increasing number of faculty developers recognize how strongly administrators can influence faculty response to opportunities to engage in learning about teaching. For example, when campus leaders send a message that in essence says, “If you want to participate in faculty development, that is OK; if you don’t want to, that is OK too”, you can expect a lukewarm faculty response to faculty development events. The authors of several articles (e.g., Brooks et al., 2011; McKinney and Jarvis, 2009; Stes et al., 2010) wanted to understand the role of organizational development in their programs, that is, (a) whether institutional policies and practices affected the readiness of faculty to participate in activities or to implement ideas received, and (b) whether faculty were able to share their ideas with others, e.g., was there a forum and an openness to sharing ideas about teaching with colleagues or with administrators?

Leaders can also have an impact on student actions (arrow “E”), for example, by instituting curricular programs such as campus-wide learning portfolios or freshmen seminars.

HOW Can We Collect the Desired Information?

The preceding array of multiple topics we might want to know about is wonderful, but it creates a major problem: How can we possibly collect information about all these topics, especially about actual changes in the teaching practices of professors and in the learning of their students?