EDTECH 505 (SP10)

Susan Ferdon

Week 10 Assignments

2. Go back and review chapters 1-8 in the Boulmetis & Dutwin text. Then write down your thoughts about each chapter. Minimum 50 words for each chapter (1-8). It can be a summary and/or critique and/or your general thoughts.

Summary: Chapter 1 - What Is Evaluation?

Key to understanding evaluation is an awareness that it is a systematic process whose goal is program improvement. High-quality evaluations address efficiency (cost), effectiveness (substantive changes), and impact (long-term, sustainable changes) at every stage of the program. Reasons to evaluate include: 1) a mandate, 2) making a case for a new program, 3) anticipating a need to justify, improve, or change a program, and 4) preparing to seek funding. Regardless of motivation, monitoring and formative and summative assessments provide the information needed to evaluate programs. The essential parts of the evaluation design format (p. 15) provide the starting point for program evaluation.

Summary: Chapter 2 - Why Evaluate?

Reasons to evaluate include: 1) determining benefits, 2) meeting planning needs, and 3) determining the effectiveness of approaches. Benefits, limitations, and considerations vary with the vantage point of stakeholders and funders. Benefits include return on investment, benefit to staff (e.g., deepened understanding, improved communication), identification of new audiences/applications, and greater understanding of outcomes/impact. However, change is not guaranteed, and evaluation may leave the program open to criticism. Organizations evaluate programs to compare approaches or to decide whether programs should be retained or eliminated. The needs of stakeholders (sponsors, program staff, community members) determine the “why” of evaluation, which, in turn, shapes the evaluation design.

Summary: Chapter 3 - Decision Making: Whom to Involve, How, and Why

Everyone with a stake in the program – funders, staff, clients/recipients, and community members – should be involved throughout the program cycle (goals, needs analysis, program planning, implementation, summative evaluation), but that is often not the case. Involving an evaluator from the outset is preferred, though involvement most commonly begins in the implementation phase. The evaluator works with program staff to clarify goals, identify needs, and help develop the evaluation design. He or she examines specific activities, formulates evaluation questions, monitors, measures progress, and reports to program staff. The evaluator’s summative evaluation, reported to funders and other stakeholders, documents overall effectiveness.


Summary: Chapter 4 - Starting Point: The Evaluator's Program Description

The Evaluator’s Program Description (EPD) makes all aspects of the program clear, revealing the goals, objectives, and measurement tools that are in place. Components include goal statements, activities, and evaluation procedures. In creating the EPD, it is imperative that the evaluator have a trusting relationship with program staff so that the information shared is truthful and complete. This allows the evaluator to develop evaluation questions that provide a strong foundation upon which to design the evaluation. The EPD should be geared toward the recipient of the evaluation and, in the case of differing audiences, multiple EPDs should be written.

Thoughts: Chapter 5 - Choosing an Evaluation Model

This chapter presented an excellent overview of evaluation models, but of even greater value were topics not included in other readings: 1) clarification of the differences between research and evaluation, and 2) the debate between those who say evaluator expertise is best and others who believe it invites preconceived notions and bias. I particularly liked that the names of the developers of each model were included, as it made credible research easier when I wanted to learn more about the Decision-Making Model (Stufflebeam). Also, recognizing Michael Scriven’s (Goal-Free Model) name in EVALTALK e-mails reinforced that the text embodies current practice in program evaluation.

Critique: Chapter 6 - Data Sources

Descriptions and examples of the four types of data (nominal, ordinal, interval, ratio) were helpful, but examples of possible application – when to use what – would have made this information much more relevant. Sections related to interviews were quite informative. Though interviews may be the least valid and reliable form of data collection a novice can attempt, descriptions of question types, along with precautions, provide a good starting point. Examples in the Interview, Scales, and Sentence Completion sections give the reader a level of detail not present earlier in the chapter and will facilitate greater success in real-world application.
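To make “when to use what” concrete, here is a small sketch of my own (not from the text) pairing each measurement level with a summary statistic commonly suited to it; the survey fields and values are invented for illustration:

```python
# Hypothetical survey fields paired with the four measurement levels
# and a summary statistic commonly suited to each level.
from statistics import mean, median, mode

responses = {
    # Nominal: unordered categories -> report the mode (most frequent value).
    "favorite_subject": (["math", "art", "math", "science"], mode),
    # Ordinal: ordered categories -> report the median rank.
    "satisfaction_1_to_5": ([4, 5, 3, 4, 2], median),
    # Interval: equal spacing but no true zero (e.g., scaled test scores) -> mean.
    "pretest_score": ([72, 85, 90, 65], mean),
    # Ratio: true zero point (e.g., minutes of lab use) -> mean; ratios are meaningful.
    "weekly_lab_minutes": ([0, 45, 90, 30], mean),
}

for field, (values, summarize) in responses.items():
    print(f"{field}: {summarize(values)}")
```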

Critique: Chapter 7 - Data Analysis & Appendix A

Data analysis is a topic that can fill volumes, and the authors recognize that the prospect of working with statistics can be daunting. The unfortunate choice of beginning the chapter with interview data seemed only to muddy the waters. Explanations of terms (e.g., mean, median) that began with a simple definition were easier to grasp than those that began with a narrative example. Data analysis is math-based, but the chapter was written as lengthy verbal descriptions that are potentially confusing. Had calculations been presented in a more math-like format (stacking numbers rather than listing them), the cognitive load would have been lessened.
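For instance, the kind of math-like presentation I have in mind (my own example, not one from the text) might look like this for the scores 3, 5, 7, 9, and 12:

```latex
\[
\text{mean} = \frac{3 + 5 + 7 + 9 + 12}{5} = \frac{36}{5} = 7.2
\qquad
\text{median} = 7 \quad \text{(the middle value of the ordered list)}
\]
```

Seeing the numbers stacked over the divisor, rather than strung out in a sentence, is exactly the reduction in cognitive load I have in mind.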

Thoughts: Chapter 8 - Is It Evaluation or Is It Research?

While methods for gathering information for research and evaluation may share characteristics, understanding the differences between research and evaluation is key. Employing appropriate rigor for an evaluation will streamline data collection, since resources will not be wasted on unnecessary processes (e.g., gathering data from a control group when one is not needed). Descriptions of terms (pp. 142-149) provided a good foundation, and numerous examples clarified how and when to use various sampling methods, but I would have liked to see more about surveys. Keeping the end in mind – program improvement – will go a long way toward guiding program development, implementation, and evaluation.
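As a quick illustration of two of those sampling methods, here is a minimal sketch of my own (the roster, group names, and sizes are invented, not taken from the text) contrasting a simple random sample with a proportional stratified sample:

```python
# Invented roster of 100 teachers grouped by school level, used only
# to contrast simple random sampling with stratified sampling.
import random

random.seed(505)  # fixed seed so the example is reproducible

roster = [f"teacher_{i}" for i in range(100)]
strata = {
    "elementary": roster[:50],  # 50 teachers
    "middle": roster[50:80],    # 30 teachers
    "high": roster[80:],        # 20 teachers
}

# Simple random sample: every teacher has an equal chance of selection.
simple_sample = random.sample(roster, 10)

# Stratified sample: draw from each subgroup in proportion to its size,
# so a small group (here, high school) cannot be missed by chance.
stratified_sample = []
for level, members in strata.items():
    share = round(len(members) / len(roster) * 10)  # proportional share of 10
    stratified_sample.extend(random.sample(members, share))

print("simple:    ", simple_sample)
print("stratified:", stratified_sample)  # 5 elementary, 3 middle, 2 high
```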

3. Review all the previously assigned Internet readings. Discuss the readings. Did one or more stand out or make an impression on you? If so, why? Your written comments can be specific about one or more of the readings and/or general comments about all the readings. Minimum 150 words.

As an educator new to evaluation, one reading that made an impression on me was An Educator’s Guide to Evaluating The Use of Technology in Schools and Classrooms (Quiñones & Kirshstein, 1998). Written in layman’s terms, the guide draws its examples from K-12 education; its information is well organized, and sidebar headings make it easy to skim for specific information. Tables provide at-a-glance synopses of main ideas, and worksheets support the implementation of the processes presented in the document.

The portion of An Educator’s Guide we read for Week Six relates to data sources. For each source, a definition/description is followed by real-world examples; pros and cons of the various techniques are offered, and practical advice is provided. For most educators involved in an evaluation, the guide is sufficient for their needs. Those charged with planning and carrying out an evaluation, on the other hand, would likely benefit from additional information and greater detail. Though the information and examples may not transfer readily to other fields, as a budding educational technologist I find this reading eminently applicable to my future endeavors.

In the face of rapidly changing technologies, this guide – now more than ten years old – remains a useful resource. Geared toward “educators or administrators with little or no research experience” (Quiñones & Kirshstein, 1998, p. 1), its examples relate to hardware and staff development – universal themes in educational technology. The Rivers School District scenario is easy to relate to, and the evaluation process (pp. 2-43) is compatible with other course readings; for example, the “Why Am I Evaluating” section (pp. 3-5) corresponds to Chapter 2 in The ABCs of Evaluation. Of particular interest are the technology surveys (pp. 59-113). Though the wording would require updating to reflect current technologies, they are an excellent resource, as well-constructed surveys can yield useful data.

Reference:

Quiñones, S., & Kirshstein, R. (1998). An educator’s guide to evaluating the use of technology in schools and classrooms. Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement.


Bonus:

Design/create/develop another assignment that facilitates reviewing all the work in the course to date; that is, come up with an assignment that could be used at this point in the semester the next time EDTECH 505 is taught. This bonus is optional and is worth two extra points.

A Hypothetical Situation: The Educational Technology Department at Boise State is reconsidering textbook requirements for EDTECH 505 – Evaluation for Educational Technologists. In 200 words (minimum), present your case for or against continued use of Boulmetis and Dutwin’s The ABCs of Evaluation. Support your views with examples from each chapter in the text and all of the Internet readings.

(Note: Given a choice, I wouldn’t select this assignment – I like the one we did better because it is more open-ended.)