Chapter 3 & 4 Reading Response

Where do the alternative views of evaluation originate from?

The alternative views of evaluation stem from philosophical beliefs. These philosophical beliefs then shape the other factors that influence the final evaluation program:

·  Worldviews: Epistemological issues

·  Methodological approaches

·  Metaphoric views of evaluation

·  Perceived needs for new data

·  Practical considerations: the intent of the evaluation, the politics involved, prior experiences with evaluation, and who should conduct the evaluation

Evaluation, not unlike research, finds its origins in epistemology. A worldview is a set of beliefs that guides the direction or action an individual will take. The philosophical development of how we come to know has led to a number of different views on acquiring knowledge. Evaluation programs acquire knowledge, so it makes sense that the research and evaluation directions chosen are rooted in a philosophical belief.

a)  Objectivist epistemology places the evaluator outside the system being evaluated; separate from the system, the evaluator discovers the knowledge. The objectivist approach would typically be scientific, and quantitative evidence would be a product of the approach. An objectivist would test a theory or measure against stated goals (positivism: cause and effect). Up to the 1980s, quantitative research was leading the way, and this was likely mirrored in the world of evaluation.

i) Utilitarian principles: This approach places a “value on the overall impact of the program on those affected”. The evaluator focuses on “the total group gain by using average outcome scores”. The goals are set and measured, leading to a quantitative evaluation methodology.

b)  Subjectivist epistemology suggests that knowing comes from experience, which may include connecting many experiences to create complex understandings. Subjective approaches lead to qualitative evaluation methodologies, or potentially a mix of qualitative and quantitative methods. This worldview leads into the participatory and empowerment evaluation frameworks from previous readings. Social constructionists and pragmatists would also fall into this belief that experience leads to knowing.

i) Intuitionist-pluralist principles: The value is placed on the impact of the program on each individual, so more participatory methods follow from this value system. The participatory approach lends itself to qualitative methods or a mixed-methods approach to collecting information.

The philosophical view may inform the methodology used in an evaluation, but an evaluator should determine which of the many worldviews and value systems fits the evaluation question at hand.

How are metaphors connected to evaluation?

Metaphors are attempts to link our experiences and create new perceptions, and how we perceive programs may influence the evaluation approach used. The example given equates social programs with industrial production, so machines or assembly lines become metaphors for social programs; that may lead an evaluator to look at efficiencies and a quantitative methodology. Social programs may also be equated with sports, and the result would be a focus on targets and goals.

What is the classification schema for evaluation approaches?

Although the schema below places boxes around different approaches, there is surely blurring across the lines, and it resembles a continuous spectrum. The schema is a continuum that works downward from epistemology to values, to methods, to needs.

Objectivism → Utilitarian evaluation → Quantitative → Objectives-oriented; Management-oriented

Subjectivism → Intuitionist-pluralist evaluation → Qualitative → Consumer-oriented; Expertise-oriented; Naturalistic & participant-oriented

Objectives-oriented: focus on specific goals being attained

Management-oriented: meeting the informational needs of managers

Consumer-oriented: developing evaluation information on products

Expertise-oriented: depends on the application of professional expertise to judge quality in an evaluation

Naturalistic & participant-oriented: participants are key in identifying values, needs, data, and conclusions for the evaluation

What is objectives-oriented evaluation?

Objectives-oriented evaluation focuses on the attainment of the goals, purposes, standards, etc., of some activity.

What does each contributor add to the objectives-oriented evaluation approach?

Tylerian evaluation approach: evaluation of the extent to which the objectives of a program are attained. His approach involved establishing goals, classifying goals, defining the goals in behavioral terms, finding situations in which achievement can be shown, selecting measurement techniques, collecting performance data, and comparing the performance data with the behavioral objectives. Tyler’s approach required the goals to be screened, either with a logical method or an empirical method.

Metfessel’s contribution: He established goals and interpreted performance against those goals, and he added a number of instruments for collecting data.

Provus’s contribution: He developed a four-stage developmental process (with an optional fifth stage) that a program goes through, along with a sense of the type of evaluation that should occur at each stage.

i)  Definition: a clear set of standards is defined to direct all further evaluation.

ii)  Installation: the evaluator measures program operation against the standards and looks for discrepancies.

iii)  Process: evaluator gathers data on participants to see if their behavior has changed.

iv)  Product: the evaluator determines if the final objectives were met.

How do you apply objectives-oriented evaluation?

Educational standards movement

Criterion-referenced assessments

Drug and alcohol prevention programs

Management by objectives; Planning, Programming, and Budgeting Systems

Politically: the Government Performance and Results Act

What are the pros and cons of objectives-oriented evaluation?

Positives: simplicity; clearly defined outcomes that allow evaluators to assess discrepancies; and technically sound measurement strategies

Limitations: lacks a real evaluative component (no explicit judgments of worth); neglects the value of the objectives themselves (though there is a filter to pass them through); neglects the environment in which the evaluation takes place (sounds a little like ethnography); may miss unintended outcomes and omit evidence not tied to explicit outcomes; promotes tunnel vision

What is goal-free evaluation (a bit like ethnography: observing without interacting to derive meaning)?

·  The evaluator avoids an awareness of the goals (preventing tunnel vision)

·  Predetermined goals are not permitted to narrow the focus of the evaluation study

·  The focus is on actual outcomes and not intended outcomes

·  The goal-free evaluator has minimal contact with the program manager and staff

·  Goal-free evaluation increases the likelihood that unanticipated side effects will be noted

Role of the evaluator:

The reading for session 3 was helpful in placing what seemed to be a nebulous field into a framework. Although the framework (schema) is much more like a continuum, it still helps more linear thinkers. The organization of the evaluation approaches was beneficial to me because it reinforced the idea that the nature of the evaluation program should direct the evaluator toward a particular approach. The evaluator should not see themselves as an objectives-oriented evaluator, a participant-oriented evaluator, or one of the other perspectives.

As an evaluator, you should recognize that your worldview, experiences, and metaphors may create a biased approach to a particular evaluation program. It seems that the evaluator would need to guard against personal biases by truly understanding the program being judged.

Connection to my role:

The tunnel vision that can be created by an objectives-oriented evaluation program is a significant concern in education. Standards-based evaluation programs in schools create a very specific set of results. The discrepancy between the standards and performance is quite clear, but I wonder what unintended outcomes we are missing.

The standards-based movement provides data that informs policy provincially, nationally, and internationally. Objectives-oriented assessments or evaluations may mislead policy because they do not judge unintended outcomes. Qualitative methods are needed to avoid tunnel vision and misinformed policies; including qualitative data may provide a much richer picture for the evaluator.

At this point, I think the limitations of objectives-oriented evaluation programs would lead me toward a more mixed-methods approach in education. Of course, it would depend on what I was evaluating.