Empowerment evaluation

Empowerment evaluation (EE) is an evaluation approach designed to help communities monitor and evaluate their own performance. It is used in comprehensive community initiatives as well as small-scale settings and is designed to help groups accomplish their goals. According to David Fetterman, "Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination". [1] An expanded definition is: "Empowerment evaluation is an evaluation approach that aims to increase the likelihood that programs will achieve results by increasing the capacity of program stakeholders to plan, implement, and evaluate their own programs." [2]

Scope

Empowerment evaluation has been used in programs ranging from a fifteen million dollar Hewlett-Packard corporate philanthropy effort [3] to accreditation in higher education [4] and from the NASA Jet Propulsion Laboratory’s Mars Rover project [5] to battered women's shelters. [6] Empowerment evaluation has been used by government, foundations, businesses, and non-profits, as well as Native American reservations. It is a global phenomenon, with projects and workshops around the world including Australia, Brazil, Canada, Ethiopia, Finland, Israel, Japan, Mexico, Nepal, New Zealand, South Africa, Spain, Thailand, the United Kingdom, and the United States. A sample of sponsors and clients includes Casey Family Programs, Centers for Disease Control and Prevention, Family & Children Services, Health Trust, Knight Foundation, Poynter, Stanford University, State of Arkansas, UNICEF and Volunteers of America. [7]

History and publications

Empowerment evaluation was introduced in 1993 by David Fetterman during his presidential address at the American Evaluation Association’s (AEA) annual meeting. [1]

The approach was initially well received by some researchers who commented on the complementary relationship between EE and community psychology, social work, community development and adult education. They highlighted how it inverted traditional definitions of evaluation, shifting power from the evaluator to program staff and participants. Early supporters positively noted the focus on social justice and self-determination. One colleague compared the approach's writings to Martin Luther's 95 Theses. [8] [9] [10]

Empowerment Evaluation: Knowledge and Tools for Self-assessment and Accountability, [11] the first empowerment evaluation book, provided an introduction to theory and practice. It highlighted EE's scope, ranging from its use in a national educational reform movement to its endorsement by the W. K. Kellogg Foundation’s Director of Evaluation. The book presented examples in various contexts, including federal, state, and local government; HIV prevention and related health initiatives; African American communities; and battered women’s shelters. This first volume also provided various theoretical and philosophical frameworks as well as workshop and technical assistance tools.

Foundations of Empowerment Evaluation [12] was the second EE book. It provided steps and cases, and highlighted the role of the Internet in facilitating and disseminating the approach.

The third book, Empowerment Evaluation Principles in Practice, [13] emphasized greater conceptual clarity by making explicit EE's underlying principles, ranging from improvement and inclusion to capacity building and social justice. In addition, it highlighted EE's commitment to accountability and outcomes by stating them as an explicit principle and presenting substantive outcome examples. Cases described include educational reform, youth development programs and child abuse prevention programs. [14]

Theories

The primary theories guiding empowerment evaluation are process use and theories of use and action. [15] [16] [17]

Process use represents much of the rationale or logic underlying EE in practice, because it cultivates ownership by placing the approach in community and staff members’ hands.

The alignment of theories of use and action explains how empowerment evaluation helps people produce desired results. [18] [19] [20] [21] [22] [23]

Process use

Empowerment evaluation is designed to be used by people. It places evaluation in the hands of community and staff members. The more that people are engaged in conducting their own evaluations, the more likely they are to believe in them, because the evaluation findings are theirs. In addition, a byproduct of this experience is that they learn to think evaluatively. This makes them more likely to make decisions and take actions based on their evaluation data. This way of thinking is at the heart of process use. [24]

Principles

Empowerment evaluation is guided by 10 principles. [25] These principles help evaluators and community members align decisions with the larger purpose or goals associated with capacity building and self-determination.

  1. Improvement – help people improve program performance
  2. Community ownership – value and facilitate community control
  3. Inclusion – invite involvement, participation, and diversity
  4. Democratic participation – open participation and fair decision making
  5. Social justice – address social inequities in society
  6. Community knowledge – respect and value community knowledge
  7. Evidence-based strategies – respect and use both community and scholarly knowledge
  8. Capacity building – enhance stakeholder ability to evaluate and improve planning and implementation
  9. Organizational learning – apply data to evaluate and implement practices and inform decision making
  10. Accountability – emphasize outcomes and accountability.

Concepts

Key concepts include critical friends, cultures of evidence, cycles of reflection and action, communities of learners, and reflective practitioners. [26] A critical friend, for example, is an evaluator who provides constructive feedback. [27] They help to ensure the evaluation remains organized, rigorous and honest.

Steps

EE's three-step approach includes: [28] [12]

  1. establish their mission;
  2. review their current status; and
  3. plan for the future.

This approach is popular in part due to its simplicity, effectiveness and transparency.

A second approach is the 10-step Getting to Outcomes (GTO).[ citation needed ] GTO helps participants answer 10 questions using relevant literature, methods and tools. The 10 accountability questions and literature to address them are:

  1. What are the needs and resources? (Needs assessment; resource assessment)
  2. What are the goals, target population and desired outcomes? (Goal setting)
  3. How does the intervention incorporate knowledge of science and best practices in this area? (Science and best practices)
  4. How does the intervention fit with existing programs? (Collaboration; cultural competence)
  5. What capacities do you need to implement a quality program? (Capacity building)
  6. How will this intervention be carried out? (Planning)
  7. How will the quality of implementation be assessed? (Process evaluation)
  8. How well did the intervention work? (Outcome and impact evaluation)
  9. How will quality improvement strategies be incorporated? (Total quality management; continuous quality improvement)
  10. If the intervention is (or components are) successful, how will the intervention be sustained? (Sustainability and institutionalization)

A manual with worksheets addresses how to answer the questions. [29] While GTO has been used primarily in substance abuse prevention, customized GTOs have been developed for preventing underage drinking [ citation needed ] and promoting positive youth development.[ citation needed ] Several of these books are available for download. In addition, EE can employ photojournalism, online surveys, virtual conferencing and self-assessments.[ citation needed ]

Monitoring

Both conventional and innovative evaluation tools are used to monitor outcomes, including online surveys, focus groups and interviews, as well as quasi-experimental designs. In addition, program-specific metrics are developed, using baselines, benchmarks, goals and actual performance. For example, a minority tobacco prevention program in Arkansas established:

  1. Baselines (the number of tobacco users)
  2. Goals (the yearly number of subjects helped)
  3. Benchmarks (the monthly number of subjects helped)
  4. Performance (the number of subjects who stop smoking)

These metrics help the community monitor implementation by comparing performance with benchmarks, and enable them to make mid-course corrections.
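The benchmark comparison described above can be sketched in a few lines of code. All names and figures below are hypothetical illustrations, not data from the Arkansas program:

```python
# Illustrative sketch only: comparing monthly performance against
# benchmarks to flag where a mid-course correction may be needed.
# The month labels and figures are hypothetical, not program data.

def monitoring_report(benchmarks, performance):
    """Return (month, shortfall) pairs for months below benchmark."""
    return [
        (month, benchmarks[month] - performance[month])
        for month in benchmarks
        if performance[month] < benchmarks[month]
    ]

# Hypothetical benchmark: 40 subjects helped per month.
benchmarks = {"Jan": 40, "Feb": 40, "Mar": 40}
# Hypothetical actual performance.
performance = {"Jan": 42, "Feb": 31, "Mar": 38}

print(monitoring_report(benchmarks, performance))
# → [('Feb', 9), ('Mar', 2)]
```

Here February and March fall short of the benchmark, signaling that a mid-course correction may be warranted before the yearly goal is missed.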

Selected case examples

Stanford University School of Medicine applied the technique to curricular decision making. [26] EE contributed to improvements in course and clerkship ratings. For example, the average student ratings for required courses improved significantly (P = .04; Student's one-sample t test).

EE guided Hewlett-Packard's $15 million Digital Village Initiative. The initiative was designed to help bridge the digital divide in communities of color. Outcomes ranged from Native Americans building one of the largest unlicensed wireless systems in the country to the creation of a high-resolution digital printing press.[ citation needed ]

The State of Arkansas used EE in academically distressed schools and tobacco prevention. Outcomes include improving test scores, upgrading school-level performance and preventing and reducing tobacco consumption. [30]

A school district in South Carolina invested millions of their own dollars to provide each student with a personalized computing device as an educational tool. EE was used to support large scale implementation of the initiative and monitor outcomes associated with teacher and student behavior change. [31]

Rationale

Response to critique

EE is conducted by an internal group, not an external individual. Programs are dynamic, not static, and thus require more fluid, responsive, and continual assessment. The evaluator becomes a coach, rather than the expert. Investigating worth and merit is not sufficient; the focus should also be on program improvement. Empowerment evaluation, as a group activity, builds in self-checks on bias. Internal and external forms of evaluation are compatible and reinforcing. Moreover, when the Joint Committee's standards were applied, empowerment evaluation was found to be consistent with their spirit. Empowerment evaluation is not a threat to traditional evaluation; it may instead help to revitalize it. [32]

Empowerment evaluation is part of an emancipatory research stream. Its unique contribution is its focus on fostering self-determination and building capacity. Empowerment evaluation is guided by process use. Additional effort could be made to further distinguish empowerment from collaborative, participatory, stakeholder, and utilization forms of evaluation. Empowerment evaluation should be limited or focused on the disenfranchised and issues of liberation. Empowerment evaluation has become a part of the evaluation landscape. [16]

Empowerment evaluation is part of a worldwide movement. It is now part of the evaluation field. However, empowerment evaluation needs to focus on the consumer, rather than staff members. In addition, the definition of empowerment evaluation has changed. Bias in evaluation can be removed by distancing oneself from the group or program being assessed. Internal and external forms of evaluation are needed. Empowerment evaluators serve as evaluation consultants. [33]

The definition of empowerment is the same as when the approach was first defined and introduced to the field. However, it has been expanded to further clarify the purpose of the approach. Fetterman and Wandersman agree that empowerment evaluation is part of an emancipatory stream of research. It also relies on process use to guide it. They also believe that greater effort is needed to further distinguish empowerment from other forms of stakeholder-involved approaches. However, empowerment evaluation can be viewed along a continuum from less empowering to more empowering in nature. Empowerment evaluation is designed to help the disenfranchised. However, the boundaries are much broader and inclusive. Everyone can benefit from self-assessment and becoming more self-determined. Fetterman advocated that evaluation be shared with a broader population. [1] [34]

Debates and controversy

Empowerment evaluation challenged the status quo concerning who is in control of an evaluation and what it means to be an evaluator. Conventionally, evaluations are conducted by a specialist. In EE, the group or community performs the evaluation, guided by an empowerment evaluator or “critical friend.”

First wave of criticism

Stufflebeam claimed that evaluation should be left in the hands of professionals who objectively investigate the worth or merit of an object and that EE violates the (as yet unadopted) Joint Committee's Program Evaluation Standards. [35] [36]

Fetterman and Scriven agreed on the value of both internal and external evaluations. They also agreed on a focus on the consumer, although staff members, sponsors, and policy makers also have important roles to play in evaluation. Scriven nevertheless claimed that the evaluator must maintain distance from program participants to avoid bias. [37] [38]

Chelimsky re-framed the discussion between Fetterman, Patton and Scriven, explaining that evaluations serve multiple purposes: (1) accountability; (2) development; and (3) knowledge. Scriven, and to a lesser extent Patton, focused on accountability, while Fetterman focused on development. [39]

Second wave

The second wave of debate and discussion emerged between 2005 and 2007. The primary critiques focused on conceptual and methodological clarity:

Cousins attempted to differentiate between similar approaches, e.g. collaborative, participatory, and empowerment evaluation. Cousins asked whether EE is practical (focusing on decision making) or transformative (focusing on self-determination) and viewed self-evaluation as more likely to have a self-serving bias. He also noted the variability in attempts at empowerment evaluation. [40]

Miller and Campbell conducted a systematic literature review of empowerment evaluation. They highlighted types or modes of EE, as well as settings, reasons for use, selection process and degree of participation. They highlighted practice variants depending on the size of the evaluation. They suggested that clients were selecting it for appropriate reasons, such as capacity building, self-determination, accountability, cultivating ownership and institutionalization of evaluations. However, they also found that approximately 25% were empowerment in name only. In addition, they argued for additional conceptual clarity. [41]

Patton accepted EE as part of the evaluation field and proposed that, given its established status, additional clarity distinguishing collaborative, participatory, utilization and empowerment evaluation would be fruitful. He acknowledged improvements ranging from refined definitions to the addition of the 10 principles, though he was concerned that self-determination was not on the list. Patton applauded and recommended process use for empowerment evaluation. He accepted the contributors' commitment to forthrightly describing problems. Patton proposed greater emphasis on outcomes or results in EE. [42]

Scriven argued that self-evaluation is flawed, because it is inherently self-serving, and rejected its use for professional development. [43] He questioned the ability of EE to actually empower people and recommended a neutral evaluator role. He suggested that internal and external evaluations are not compatible, and that empowerment, as well as randomized controls, are merely forms of ideology. [44]

Response to critique

Fetterman and Wandersman responded by attempting to enhance conceptual clarity, provide greater methodological specificity and highlight EE's commitment to accountability and outcomes. They acknowledged and applauded Miller and Campbell's systematic review of EE projects, while noting neglected or omitted case examples and questioning some of their methodology.

They claimed that the 10 principles contributed to conceptual clarity and that people empower themselves. They asserted that evaluations are inherently subjective and are shaped by culture and political context, and that EE is committed to honesty and rigor. EE is more inclusive than traditional evaluations, placing cross-checks on data and decisions. Participants often know more about problems than outsiders and have a vested interest in making their programs work. They claimed that internal and external evaluations can operate together effectively as additional cross-checks.

While the similarities among collaborative, participatory and empowerment evaluation were described in the first and second empowerment evaluation books, they recommended Cousins' tool to highlight the differences, focusing on depth of participation and control of evaluation technical decision making[ citation needed ]

The most significant response to the critiques focused on outcomes. Fetterman and Wandersman argued that outcomes and results were important to EE. They highlighted specific project outcomes.

Outcomes

  • CDC funded a study using a quasi-experimental design that demonstrated improved outcomes as a result of empowerment evaluation. [45]
  • Empowerment evaluation, used in Arkansas's academically distressed schools, increased standardized test scores.
  • Native Americans built a wireless system and digital printing press supported by empowerment evaluation.
  • Stanford University's School of Medicine used EE to prepare for an accreditation site visit. Increases in student course ratings were statistically significant. [26]
  • Arkansas saved millions in excess medical costs by applying empowerment evaluation to tobacco prevention programs. This resulted in legislation creating the Arkansas Evaluation Center. [46] [30]

Scriven's assessment

Scriven agreed that external evaluators sometimes miss problems obvious to program staff members. He also stated that external evaluators have less credibility with staff than an internal evaluator. As a result, he concluded, their recommendations are less likely to be implemented. [47]

Scriven agreed that EE contributed to improvements in internal staff program evaluations and that empowerment evaluation could make a contribution to evaluation if combined with third-party evaluation. [48]

Professional association affiliation and awards

Empowerment evaluation was a catalyst for the creation of the American Evaluation Association's Collaborative, Participatory, and Empowerment Evaluation topical interest group. Approximately 20% of the American Evaluation Association membership is affiliated with the topical interest group. [49] SAGE Publications, a social science textbook publisher, cited an empowerment evaluation book as one of their "classic titles in research methods". [50] Four empowerment evaluators received honors from the association: Margret Dugan, David Fetterman, Shakeh Kaftarian, and Abraham Wandersman. [51]

Notes

  1. Fetterman 1994.
  2. Wandersman et al. 2005.
  3. Fetterman 2012, pp. 98–107.
  4. Fetterman 2011.
  5. Fetterman & Bowman 2002.
  6. Andrews 1996.
  7. "videos". Archived from the original on 2011-09-27. Retrieved 2012-01-10.
  8. Altman 1997.
  9. Brown 1997.
  10. Wild 1997.
  11. Fetterman, Kaftarian & Wandersman 1996.
  12. Fetterman 2001b.
  13. Fetterman & Wandersman 2004.
  14. See Donaldson, 2005 review of Empowerment evaluation principles in practice
  15. Argyris & Schon 1978.
  16. Patton 1997a.
  17. Patton 1997b.
  18. Dunst, Trivette & LaPointe 1992.
  19. Zimmerman 2000.
  20. Zimmerman et al. 1992.
  21. Zimmerman & Rappaport 1988 See Bandura, 1982 concerning self-efficacy.
  22. Alkin & Christie 2004.
  23. Christie 2003.
  24. Patton 1997b, p. 189.
  25. Fetterman & Wandersman 2004, pp. 1–2, 27–41, 42–72.
  26. Fetterman, Deitz & Gesundheit 2010.
  27. Fetterman 2009.
  28. Fetterman 2001a.
  29. Chinman, Imm & Wandersman 2004.
  30. Fetterman & Wandersman 2007.
  31. Lamont, A., Wright, A., Wandersman, A, & Hamm, D. (2014). An empowerment evaluation approach to implementing with quality at scale. In Fetterman, Kaftarian, & Wandersman (Eds), Empowerment evaluation: Knowledge and tools for self assessment, evaluation capacity building, & accountability (2nd ed).
  32. Fetterman 1995.
  33. Scriven 1997.
  34. David M. Fetterman (2002-07-03). "Empowerment evaluation". Evaluation Practice. 15: 1–15. doi:10.1016/0886-1633(94)90055-8.
  35. "Program Evaluation Standards Statements « Joint Committee on Standards for Educational Evaluation". Jcsee.org. 2010-10-27. Retrieved 2013-01-27.
  36. Stufflebeam 1994.
  37. Fetterman 2010.
  38. Debate between Fetterman, Patton and Scriven is available online in text form from the Journal of MultiDisciplinary Evaluation, archived 2012-07-15 at archive.today. It was also recorded and is available in Claremont's virtual library.
  39. Fetterman 1997.
  40. Cousins 2004.
  41. Miller & Campbell 2006.
  42. Patton 2005.
  43. Scriven 2005.
  44. Smith 2007.
  45. Chinman et al. 2008.
  46. David Fetterman. "Arkansas Evaluation Center". Arkansasevaluationcenter.blogspot.com. Retrieved 2013-01-27.
  47. Scriven 1997, p. 12.
  48. Scriven 1997, p. 174.
  49. Rodríguez-Campos 2012.
  50. How SAGE has shaped Research Methods, p. 12. SAGE Publications
  51. Patton 1997a, p. 148; American Evaluation Association Award Recipients. Archived 2012-01-14 at the Wayback Machine.
