JOSEPH W. BROWN

BOOK REVIEW
 
 

Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Edited by David M. Fetterman, Shakeh J. Kaftarian, & Abraham Wandersman. Thousand Oaks, CA: Sage, 1996, pp. xii + 411.

 


When the concept of empowerment evaluation served as the theme of the 1993 annual meeting of the American Evaluation Association, the stage was set for a debate on the essential purpose and utility of program evaluation. Traditionalists in the field no doubt dismissed empowerment evaluation as the latest iteration of action research, heavy on qualitative methods and lacking the requisite rigor and objectivity. Worse, with its emphasis on self-assessment and the self-determination of values and standards, was this methodology not tantamount to "giving evaluation away" to the people? Indeed, the very term empowerment evaluation was considered oxymoronic by many observers. The extent to which the conference's highlighting of an innovative methodology resulted in constructive reflection about the field is also somewhat dubious: "Evaluation innovators can help advance the theory and practice of evaluation even when they set forth confused or wrong proposals."1


Meanwhile, nearly four years later, empowerment evaluation persists as a distinct alternative to the more conventional program evaluation methodologies. What was all the fuss about? David Fetterman's 1993 presidential address on empowerment evaluation outlined a whole new way of looking at program evaluation, focusing on participatory management and assessment of programs for the purpose of fostering improvement and self-determination. While at first glance these concepts appear similar to performance-based management and group evaluation, aspects of the new management paradigm with which many of our federal agencies currently grapple, the basis of empowerment evaluation rests not in the realm of management science or efficiency but in politics. Especially in the field of public health, empowerment activities are increasingly equated with social change and are "reflective of a culture that promotes social responsibility and social justice, rather than individual satisfaction in isolation from one's community and society."2 Empowerment evaluation shifts the perspective from external, purportedly objective assessments of program merit down to an internal, community-based perspective vested firmly in the program participants themselves. In doing so, it gives rise to several relatively radical ideas.


First, the purpose of empowerment evaluation is not simply to empirically estimate a program's worth but to develop skills needed for ongoing self-assessment, so that evaluation itself is institutionalized and made sustainable at the program level. As such, evaluation becomes a means of achieving organizational strength and renewal, effectively turning the classic definition of evaluation (an object of interest is compared against a standard of acceptability) on its head. Second, the empowerment evaluator plays the role of coach, deliberately advocating on behalf of the program. Third, a program's value assessments under empowerment evaluation are flexible and ongoing; the convention of assigning static values to outputs and outcomes is dropped, in recognition that these quantities shift over time. Fourth, an empowerment evaluation is conducted by the program participants themselves, especially in the area of developing criteria for determining program worth. Under this methodology, then, no group attempting to implement a social program is subjected to the judgment of outside evaluators using the hegemony of unassailable program standards. Empowerment evaluation is political because it explicitly shifts power from the privileged position of the evaluator to the program participants, ideally resulting in a more productive partnership.






This book's authors, however, have a tendency to treat empowerment evaluation as a unique, innovative discovery rather than an integration of existing methods. Methods of evaluation that empower date at least to Lewin's early work on action research,3 and empowerment is central to both the more formative4 and the recent work on participatory evaluation.5 Fetterman states that empowerment evaluation has its roots in community psychology and action anthropology and "also derives" from participatory methods of evaluation, but a careful read leaves one with the impression that the bulk of the theorizing is drawn from Zimmerman's work on empowerment.6 Action anthropology may provide partial support, but readers unfamiliar with that field are likely to remain so. And although the authors of chapter 8 provide a more specified conceptual framework for the model, examination of the relationship between empowerment evaluation and general empowerment theory still leaves some questions unanswered: do programs, for example, constitute a level of empowerment distinct from individual, organizational, or community empowerment? Finally, the cursory treatment of theory is evidenced by the fact that in over 384 pages of text, the name Paulo Freire appears only once. Without Stephen Fawcett et al.'s chapter, the book's theoretical contribution would be lacking.
The objective of the book, however, is not to position empowerment evaluation on the complex plane of empowerment theory. Rather, it is a programmatic primer (though many of the chapters are connected to Fetterman's version of empowerment evaluation only in an indirect or ad hoc manner). The authors are mainly concerned with practice; given a set of criteria constituting empowerment evaluation, what are the potential programmatic and contextual issues in applying these criteria to an intervention? Papineau and Kiely have acknowledged that evaluators in a participatory context must be prepared to accept the wishes of participants and thus have many methodologies at their disposal.7 But Fetterman and colleagues ask evaluators to make more explicit their politics, their sense of advocacy for certain types of programs, which is what principally distinguishes empowerment evaluation from other methods of action and participatory research.


The methodology's shift from the ostensibly values-free, objective domain of the field implies a deliberate philosophical choice on the part of the evaluator: whether to use experimental or quasi-experimental designs to objectively and empirically estimate a program's impact or to undertake an advocacy approach in which empowerment evaluation is used to maximize the probability of certain worthy projects succeeding. If the latter route is chosen, the obvious question is, Which groups or which projects are worthy of being empowered? By the end of the first chapter, Fetterman helps provide the answer: empowerment evaluation is especially directed (in fact, is biased) toward the disenfranchised, including minorities, disabled individuals, and women.


It was the politics of empowerment evaluation that caused all the fuss at the 1993 conference: that instead of policymakers making judgments about social programs with the assistance of evaluation results, the process of an empowerment evaluation was in and of itself a policy judgment. This may not reflect evaluation research "providing the most accurate information practically possible in an evenhanded manner."8 At the same time, however, even the classic works that undergird the intellectual basis for traditional impact evaluation recognize that no research design is free of subjectivity, or, more accurately, completely objective.9 Preventive health in a public health context, for example, is arguably an appropriate field for advocacy research. The plethora of quantitative and "objective" studies in public health research has already served to identify population groups most at risk for disease or other undesirable health outcomes. Some of these groups, who are at the highest risk for many of the public health issues at the top of our nation's health agenda, are the same disenfranchised groups for whom Fetterman advocates an empowerment approach, and programs based in these communities should indeed be empowered to the extent possible.
Patton has described the emergence of the evaluation profession as coinciding with the need of large government efforts to constrain and better target project resources while maximizing results: "evaluation, like the urban poor, grew up in the projects."10 The analogy can be stretched further: on one hand, the new trend in many government-financed efforts is to anchor project involvement more firmly with multiple stakeholders at local levels, and on the other, the concept of community-based interventions and creating capacity has achieved prominence in public health research.11 Both trends suggest that with empowerment evaluation, we have come full circle: just as the projects have changed, so have the methodologies used to evaluate those projects, and empowerment evaluation represents the logical extreme at this end of the methodological continuum.
The book's first chapter, "Introduction to Theory and Practice," is written by Fetterman and can be read as the draft canon for empowerment evaluation, although for the most part Fetterman writes not as an apologist (some, however, would read such Fetterman terms as intellectual intoxication and research epiphany and disagree). The unique facets of the methodology are discussed here, including training, facilitation, advocacy, illumination, and liberation. The four steps of any empowerment evaluation are also outlined: (1) taking stock, (2) setting goals, (3) developing strategies, and (4) documenting progress. Beyond this, the usefulness and quality of the chapters vary widely. The book is divided into six parts, of which the first constitutes Fetterman's introduction and the last consists of Fetterman's concluding thoughts. In between is a wide range of 15 chapters dealing largely with programmatic applications.


Part 2 is titled "Breadth and Scope" and includes two informative examples of how empowerment evaluation has been used in academia on the one hand and in foundations on the other. Henry Levin's chapter describes the Accelerated Schools Project, which embodies the principles of empowerment evaluation (self-determination and ongoing, internal improvement exercises) and is thus an appropriate model for what empowerment evaluation might look like in a large project. Ricardo Millett's chapter discusses the philosophy of evaluation at the W. K. Kellogg Foundation, which has adopted an empowerment approach. Part 3 is titled "Context" and consists of four chapters that together attempt to describe the characteristics of program environments that make the conduct of empowerment evaluation easy and forthcoming at one extreme, and difficult and resistant at the other. Especially interesting in this section is the chapter by Cheryl Grills and colleagues, "Empowerment Evaluation: Building Upon a Tradition of Activism in the African American Community," in which the authors discuss how the intersection of community activism and social science evaluation pulled the community together in an empowering process and how this particular community was characterized by cultural factors that were uniquely "predisposing" to the values and tenets of empowerment evaluation.


The four chapters that make up Part 4, "Theoretical and Philosophical Frameworks," build on the theory introduced in the first chapter. As discussed, the chapter by Fawcett and colleagues examines the underlying theory more rigorously and presents a framework for empowerment evaluation activities, along with a schematic diagram of the six processes of empowerment evaluation, both of which are useful for training and other pedagogical purposes. Also offered is a section on the limitations of empowerment evaluation, including the problems of operationalizing inherently vague concepts in this area and basic questions about validity and sensitivity. Notwithstanding the rather skimpy case studies scattered throughout, this chapter offers an excellent annotated primer on empowerment evaluation. It is followed by a chapter titled "Empowerment Evaluation at Federal and Local Levels," by Robert Yin, Shakeh Jackie Kaftarian, and Nancy Jacob, that is noteworthy mainly because it presents a full-fledged case study in which empowerment evaluation is not only used but bolstered by a concern for quality and validity. The authors use Daniel Stufflebeam's criticism of Fetterman's 1993 speech as a point of departure to search for ways to effectively bridge empowerment evaluation with the Joint Committee on Standards for Educational Evaluation's Program Evaluation Standards.12 The remaining two chapters in this section are provocative essays on empowerment evaluation, perhaps less useful for those seeking applications and case studies. Those are found in Part 5, "Workshops, Technical Assistance, and Practice," in which four chapters present lessons learned from various applications of empowerment evaluation-type approaches, including the Prevention Plus III Model (chapter 12) and the Plan Quality Index (chapter 14).


Taken together, the book's chapters build a case against the conventional paradigm of program evaluation, questioning the purpose and utility of more traditional approaches and integrating participatory and social action methods into an alternative system of evaluation. The authors make no attempt to debunk the classic approach to program evaluation, and to claim so would represent a serious misreading. The book does not represent an all-or-nothing position, in which the reader either embraces the model or is labeled an experimental design recidivist; the overall intent is to




stimulate intellectual examination of the field and to present empowerment evaluation as a viable tool in the appropriate programmatic context. I have seen students of public health react strongly to the book, nearly evenly divided between favorable and unfavorable, and can, therefore, attest to its usefulness in stimulating vociferous debates about what evaluation is, what it should be, and how far from Campbell and Stanley13 we are willing to go. For that alone, I would recommend that the book be included as a supplemental text in evaluation courses at the master's level.

Joseph W. Brown, PhD
Assistant Professor
Department of Health Behavior and Health Education
School of Public Health
University of Michigan at Ann Arbor


References



1. Stufflebeam DL: Empowerment evaluation, objectivist evaluation, and the evaluation standards: Where the future of evaluation should not go and where it needs to go. Evaluation Practice 15:321, 1994.


2. Wallerstein NB, Bernstein E: Introduction to community empowerment, participatory education, and health. Health Educ Q 21:141, 1994.


3. Lewin K: Action-research and minority problems. J Social Issues 2:34-46, 1946.


4. Mark MM, Shotland RL: Stakeholder-based evaluation and value judgments. Evaluation Review 9:605-626, 1985.


5. Brunner I, Guzman A: Participatory evaluation: A tool to assess projects and empower people, in Conner RF, Hendricks M (eds.): International Innovations in Evaluation Methodology: New Directions for Program Evaluation (Vol. 42). San Francisco, CA, Jossey-Bass, 1989, pp. 9-19.


6. Zimmerman MA: Empowerment theory: Psychological, organizational, and community levels of analysis, in Rappaport J, Seidman E (eds.): Handbook of Community Psychology. New York, Plenum, in press.


7. Papineau D, Kiely MC: Participatory evaluation: Empowering stakeholders in a community economic development organization. Community Psychologist 27(2):56-57, 1996.


8. Berk RA, Rossi PH: Thinking About Program Evaluation. Newbury Park, CA, Sage, 1990, p. 7.


9. Cook TD, Campbell DT: Quasi-Experimentation: Design & Analysis Issues for Field Settings. Boston, Houghton Mifflin, 1979, pp. 30-36.


10. Patton MQ: Developmental evaluation. Evaluation Practice 15:312, 1994.


11. Clark NM, McLeroy KR: Creating capacity through health education: What we know and what we don't. Health Educ Q 22:273-289, 1995.


12. Joint Committee on Standards for Educational Evaluation: The Program Evaluation Standards. Thousand Oaks, CA, Sage, 1994.


13. Campbell DT, Stanley JC: Experimental and Quasi-Experimental Designs for Research. Chicago, Rand McNally College Publishing, 1963.