Adversary evaluation

An adversary evaluation approach in policy analysis is one which reflects a valuing orientation. [1] The approach developed in response to the dominant objectifying approaches in policy evaluation [2] and rests on two notions: 1) no evaluator can be truly objective, and 2) no evaluation can be value-free. [3] To this end, the approach makes use of teams of evaluators who present two opposing views (these teams are commonly referred to as adversaries and advocates). The two sides agree on issues to address, collect data or evidence which forms a common database, and present their arguments. A neutral party is assigned to referee the hearing and is expected to arrive at a fair verdict after considering all the evidence presented. [4]

There are many different models for adversary evaluation, including judicial, congressional-hearing and debate models. However, models which follow a legal framework are most prominent in the literature. [5]

The legal/judicial model

The judicial evaluation model adapts legal procedures to an evaluative framework. Unlike legal adversary hearings, the objective of this approach is not to win but to provide a comprehensive understanding of the program in question. [2] [4] [5] The model assumes that no evaluator can avoid having a biasing impact; the focus of these evaluations therefore shifts from scientific justification to public accountability. [2] Multiple stakeholders are involved, and the approach aims to inform both the public and those involved in the evaluation about the object of evaluation. While the model is flexible, it usually incorporates a hearing, prosecution, defence, a jury, charges and rebuttals. [3] Depending on the evaluation in question, it may also incorporate pre-trial conferences, direct and redirected questioning, and summaries by prosecution and defence (Owens, 1973). [1] Proponents of this model nevertheless stress the importance of carefully adapting it to the environment in which it is deployed and the policy it is intended to address.

Procedure

While flexibility is encouraged when implementing an adversary evaluation, some theorists have attempted to identify the stages of specific adversary models.

Wolf (1979) [2] and Thurston [6] propose the following four stages for a judicial evaluation:

1. The issue generation stage
At this stage, a broad range of issues is identified. Thurston [6] recommends that the issues considered in these preliminary stages reflect those perceived by a variety of persons involved in, or affected by, the program in question.
2. The issue selection stage
This stage consists of issue reduction. Wolf (1979) [2] proposes that issues on which there is no debate should be eliminated. Thurston [6] states that this reduction may involve extensive analysis (of content, logic and inference). The object of debate should also be defined and focused during this stage (Wolf, 1979). [2]
3. The preparation of arguments stage
This stage consists of data collection, locating relevant documents and synthesising available information. The data or evidence collected should be relevant to the for and against arguments to be deployed in the hearing (Wolf, 1979). [2] [6]
4. The hearing stage itself
This stage may also be referred to as the clarification forum and involves public presentation of the object of debate (Wolf, 1979). [2] This is followed by the presentation of evidence and panel or jury deliberation. [2] [6]
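Although Wolf's model describes a social process rather than an algorithm, the four stages form a simple pipeline: gather issues, filter out uncontested ones, attach pro and con evidence, and deliberate. A purely illustrative sketch follows; every name, data structure and the toy "verdict" rule are my own simplifications, not part of the source model.

```python
from dataclasses import dataclass, field

# Hypothetical rendering of Wolf's four stages; illustrative only.

@dataclass
class Issue:
    description: str
    contested: bool                     # is there genuine debate on this issue?
    evidence_for: list = field(default_factory=list)
    evidence_against: list = field(default_factory=list)

def generate_issues(raw_concerns):
    """Stage 1: collect a broad range of issues from stakeholders."""
    return [Issue(description=c, contested=flag) for c, flag in raw_concerns]

def select_issues(issues):
    """Stage 2: eliminate issues on which there is no debate."""
    return [i for i in issues if i.contested]

def prepare_arguments(issues, documents):
    """Stage 3: attach relevant pro and con evidence to each issue."""
    for issue in issues:
        for doc, side, topic in documents:
            if topic == issue.description:
                target = issue.evidence_for if side == "for" else issue.evidence_against
                target.append(doc)
    return issues

def hold_hearing(issues):
    """Stage 4: present both sides for deliberation. Here the 'verdict'
    is just a count of evidence items per side, standing in for the
    panel's qualitative judgement."""
    return {i.description: (len(i.evidence_for), len(i.evidence_against))
            for i in issues}

# Example run with invented data
concerns = [("funding adequacy", True), ("school colours", False)]
issues = select_issues(generate_issues(concerns))
docs = [("budget report", "for", "funding adequacy"),
        ("audit memo", "against", "funding adequacy")]
verdict = hold_hearing(prepare_arguments(issues, docs))
print(verdict)  # {'funding adequacy': (1, 1)}
```

The sketch makes one point concrete: uncontested issues drop out at stage 2, so only genuinely debated questions carry forward into evidence preparation and the hearing.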

Owens (1973) [2] provides a more detailed description of the hearing stage in an advocate-adversary setting, attributing a number of specific characteristics to this aspect of the model (Crabbe & Leroy, p. 129).

Benefits

The following are identified as benefits of using an adversarial approach:

  1. Due to the public nature of the evaluation, openness and transparency regarding the object of evaluation are encouraged. [2]
  2. As the model takes into account multiple forms of data (including statistical facts, opinions, suppositions, values and perceptions), it is argued to do justice to the complex social reality which forms part of the evaluation (Wolf, 1975). [4] [2] [6]
  3. The judicial nature of this approach may reduce political controversy surrounding an object of evaluation. [2]
  4. As both sides of an argument are presented, the risks of tactical withholding of information should be minimised. [4]
  5. This approach allows for the incorporation of a multitude of perspectives, which should promote a more holistic evaluation (Wolf, 1975, 1979). [4]
  6. The presentation of pro and con evidence and a platform which allows for cross-examination, permits public access to various interpretations of the evidence introduced into the evaluative context (Wolf, 1975). [4]
  7. The presentation of rival hypotheses and explanations may enhance both quantitative and qualitative approaches (Yin, 1999). [4]
  8. All data must be presented in an understandable and logical way in order to persuade the jury. Depending on the jury in question, this can make the data presented more accessible to the public and other stakeholders involved in the evaluation. [6]
  9. Finally, this approach is suitable for meta-evaluation and may be combined with other approaches which are participatory or expertise-oriented. [4]

Limitations

According to Smith (1985), [4] many of the limitations of this approach relate to its competitive nature, the complexity of the process, and the need for skilled individuals willing to perform the various roles required in a hearing. The main limitations of adversary evaluation are listed below:

  1. This form of evaluation may provoke venomous debate, and the resulting conflict may have a negative impact on the outcome of the evaluation. [2]
  2. The focus of the evaluation may shift to assigning blame or guilt, rather than optimising policy. [5]
  3. As adversary-advocate models are conflict-based, possibilities for reaching an agreeable outcome are curtailed. [2]
  4. Key stakeholders are not always equally skilled, and articulate individuals are placed at an advantage. [2]
  5. This method can be time-consuming and expensive (Owens, 1973). [4] [2]
  6. It is sometimes difficult for hearing members to develop specific, operational recommendations (Wolf, 1979). [4]
  7. Time-limitations may only allow for a narrow focus. [4]

Applications

Although currently out of favour, this approach has been used quite extensively in the field of educational evaluation (Owens, 1973). [4] It has also been applied to ethnographic research (Schensul, 1985) [4] and the evaluation of state employment agencies (Braithwaite & Thompson, 1981). [4]

Crabbe and Leroy [2] contend that an adversary approach to evaluation should be beneficial when:

  1. the program being evaluated may affect a large group of people;
  2. the issue in question is one of controversy and public attention;
  3. the parties involved realise and accept the power of a public trial;
  4. the object of evaluation is well-defined and amenable to polarised positions;
  5. judges are likely to be perceived as neutral; and
  6. there are sufficient time and monetary resources available for the method.
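Read as a decision aid, the six conditions amount to a checklist. The following is a purely illustrative Python rendering; the condition labels and the rule that all six must hold are my own simplification of Crabbe and Leroy's prose criteria.

```python
# Hypothetical checklist derived from Crabbe and Leroy's six conditions.
SUITABILITY_CONDITIONS = [
    "affects a large group of people",
    "issue is controversial and publicly visible",
    "parties accept the power of a public trial",
    "object is well-defined and amenable to polarised positions",
    "judges are likely to be perceived as neutral",
    "sufficient time and monetary resources are available",
]

def adversary_evaluation_suitable(met_conditions):
    """Treat the six conditions as jointly required: the approach is
    flagged as suitable only when every condition is met (a deliberate
    simplification; the authors present them as prose guidance)."""
    return all(c in met_conditions for c in SUITABILITY_CONDITIONS)

print(adversary_evaluation_suitable(set(SUITABILITY_CONDITIONS)))        # True
print(adversary_evaluation_suitable({"affects a large group of people"}))  # False
```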

Criticisms

Popham and Carlson [7] proposed that adversary evaluation was flawed based on the following six points:

  1. Disparity in adversary abilities
  2. Fallible judges
  3. Excessive confidence in the usefulness of the model
  4. Difficulty in framing issues
  5. Potential for the manipulation of results
  6. Excessive cost

Popham and Carlson, [7] however, were in turn criticised by others in the field. Gregg Jackson [8] argues that these criticisms do a "gross injustice" (p. 2) to adversary evaluation. He proposes that the only valid criticism amongst those listed is "difficulty in framing issues" (p. 2), stating that the other points are unfair, untrue or exaggerated. He further notes that Popham and Carlson [7] seemed to hold adversary evaluation to a higher or different standard than other forms of evaluation. Thurston [6] argues in line with Jackson [8] but proposes two alternative criticisms of adversary evaluation, stating that issue definition and the use of the jury pose major problems for this approach.

Finally, Worthen [5] notes that, at present, little more than personal preference determines which type of evaluation will best suit a program. Crabbe and Leroy [2] caution that every evaluation should be approached with regard to its unique needs and goals, and adjusted and implemented accordingly; no single approach is likely to satisfy the needs of all programs.

References

  1. Alkin, M. A. & Christie, C. A. (2004). An evaluation theory tree. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 12–63). CA: Sage.
  2. Crabbe, A. & Leroy, P. (2008). The handbook of environmental policy evaluation. London: Earthscan.
  3. Hogan, R. (2007). The historical development of program evaluation. Online Journal of Workforce Education and Development, 2(4).
  4. Miller, R. L. & Butler, J. (2008). Using an adversary hearing to evaluate the effectiveness of a military program. The Qualitative Report, 13(1), 12–25.
  5. Worthen, B. (1990). Program evaluation. In H. Walberg & G. Haertel (Eds.), The international encyclopedia of educational evaluation (pp. 42–47). Toronto, ON: Pergamon Press.
  6. Thurston, P. (1978). Revitalizing adversary evaluation: Deep dark deficits or muddled mistaken musings. Educational Researcher, 7(7), 3–8.
  7. Popham, W. J. & Carlson, D. (1977). Deep dark deficits of the adversary evaluation model. Educational Researcher, 6(6), 3–6.
  8. Jackson, G. (1977). Adversary evaluation: Sentenced to death without a fair trial. Educational Researcher, 6(10), 2–18.