Evaluation

In common usage, evaluation is a systematic determination and assessment of a subject's merit, worth and significance, using criteria governed by a set of standards. It can assist an organization, program, design, project or any other intervention or initiative to assess any aim, realisable concept/proposal, or any alternative, to help in decision-making; or to ascertain the degree of achievement or value in regard to the aim, objectives, and results of any such action that has been completed. [1]

The primary purpose of evaluation, in addition to gaining insight into prior or existing initiatives, is to enable reflection and assist in the identification of future change. [2] Evaluation is often used to characterize and appraise subjects of interest in a wide range of human enterprises.

Definition

Evaluation is the structured interpretation and giving of meaning to predicted or actual impacts of proposals or results. It looks at original objectives, and at what is either predicted or what was accomplished and how it was accomplished. Evaluation can therefore be formative, taking place during the development of a concept, proposal, project or organization, with the intention of improving its value or effectiveness. It can also be summative, drawing lessons from a completed action, project or organisation at a later point in time or circumstance. [3]

Evaluation is inherently a theoretically informed approach (whether explicitly or not), and consequently any particular definition of evaluation will have been tailored to its context: the theory, needs, purpose, and methodology of the evaluation process itself. Having said this, evaluation has been defined in a variety of ways, each reflecting these contextual factors.

Purpose

The main purpose of a program evaluation can be to "determine the quality of a program by formulating a judgment" (Hurteau, Houle & Mongiat, 2009). [6] An alternative view is that "projects, evaluators, and other stakeholders (including funders) will all have potentially different ideas about how best to evaluate a project since each may have a different definition of 'merit'. The core of the problem is thus about defining what is of value." [5] From this perspective, evaluation "is a contested term", as some "evaluators" use the term to describe an assessment or investigation of a program, whilst others simply understand evaluation as being synonymous with applied research.

Two functions can be distinguished according to the purpose of the evaluation. Formative evaluations provide information for improving a product or a process. Summative evaluations provide information on short-term effectiveness or long-term impact, to inform decisions about adopting a product or process. [7]

Not all evaluations serve the same purpose: some serve a monitoring function rather than focusing solely on measurable program outcomes or evaluation findings, and a full list of types of evaluations would be difficult to compile. [5] This is because evaluation is not part of a unified theoretical framework, [8] drawing instead on a number of disciplines, including management and organisational theory, policy analysis, education, sociology, social anthropology, and social change. [9]

Discussion

Strict adherence to a set of methodological assumptions may make the field of evaluation more acceptable to a mainstream audience, but this adherence works towards preventing evaluators from developing new strategies for dealing with the myriad problems that programs face. [9] It is claimed that only a minority of evaluation reports are used by the evaluand (client) (Datta, 2006). [6] One justification of this is that "when evaluation findings are challenged or utilization has failed, it was because stakeholders and clients found the inferences weak or the warrants unconvincing" (Fournier and Smith, 1993). [6] Some reasons for this situation may be the failure of the evaluator to establish a set of shared aims with the evaluand, the setting of overly ambitious aims, or a failure to compromise and to incorporate the cultural differences of individuals and programs within the evaluation aims and process. [5]

None of these problems is due to a lack of a definition of evaluation; rather, they arise when evaluators attempt to impose predisposed notions and definitions of evaluation on clients. The central reason for the poor utilization of evaluations is arguably[by whom?] a failure to tailor evaluations to the needs of the client, stemming from a predefined idea (or definition) of what an evaluation is rather than from what the client needs (House, 1980). [6] The development of a standard methodology for evaluation will require arriving at applicable ways of asking, and stating the results of, questions about ethics such as agent-principal, privacy, stakeholder definition, and limited liability, as well as could-the-money-be-spent-more-wisely issues.

Standards

Depending on the topic of interest, there are professional groups that review the quality and rigor of evaluation processes.

Evaluating programs and projects, regarding their value and impact within the context in which they are implemented, can be ethically challenging. Evaluators may encounter complex, culturally specific systems resistant to external evaluation. Furthermore, the project organization or other stakeholders may be invested in a particular evaluation outcome. Finally, evaluators themselves may encounter conflict of interest (COI) issues, or experience interference or pressure to present findings that support a particular assessment.

General professional codes of conduct, as determined by the employing organization, usually cover three broad aspects of behavioral standards: inter-collegial relations (such as respect for diversity and privacy), operational issues (due competence, documentation accuracy, and appropriate use of resources), and conflicts of interest (nepotism, accepting gifts and other kinds of favoritism). [10] However, specific guidelines tailored to the evaluator's role are needed to manage the unique ethical challenges of evaluation. The Joint Committee on Standards for Educational Evaluation has developed standards for program, personnel, and student evaluation. The Joint Committee standards are broken into four sections: Utility, Feasibility, Propriety, and Accuracy. Various European institutions have also prepared their own standards, more or less related to those produced by the Joint Committee. They provide guidelines about basing value judgments on systematic inquiry, evaluator competence and integrity, respect for people, and regard for the general and public welfare. [11]

The American Evaluation Association has created a set of Guiding Principles for evaluators, addressing systematic inquiry, competence, integrity and honesty, respect for people, and responsibilities for general and public welfare. [12] The order of these principles does not imply priority among them; priority will vary by situation and evaluator role.

Independence is attained by ensuring that independence of judgment is upheld, so that evaluation conclusions are not influenced or pressured by another party, and by avoiding conflicts of interest, so that the evaluator does not have a stake in a particular conclusion. Conflict of interest is at issue particularly where funding of evaluations is provided by bodies with a stake in the conclusions of the evaluation, which is seen as potentially compromising the independence of the evaluator. Whilst it is acknowledged that evaluators may be familiar with the agencies or projects that they are required to evaluate, independence requires that they not have been involved in the planning or implementation of the project. A declaration of interest should be made covering any benefits received from, or association with, the project. Independence of judgment must be maintained against any pressures brought to bear on evaluators, for example by project funders wishing to modify evaluations so that the project appears more effective than the findings can verify. [10]

Impartiality pertains to findings being a fair and thorough assessment of the strengths and weaknesses of a project or program. This requires taking due input from all stakeholders involved and presenting findings without bias, with a transparent, proportionate, and persuasive link between findings and recommendations. Thus evaluators are required to limit their findings to the evidence. A mechanism to ensure impartiality is external and internal review. Such review is required for significant evaluations (significance being determined in terms of cost or sensitivity). The review is based on the quality of the work and the degree to which a demonstrable link is provided between findings and recommendations. [10]

Transparency requires that stakeholders are aware of the reason for the evaluation, the criteria by which evaluation occurs and the purposes to which the findings will be applied. Access to the evaluation document should be facilitated through findings being easily readable, with clear explanations of evaluation methodologies, approaches, sources of information, and costs incurred. [10]

Furthermore, international organizations such as the IMF and the World Bank have independent evaluation functions. The various funds, programmes, and agencies of the United Nations have a mix of independent, semi-independent and self-evaluation functions, which have organized themselves into the system-wide United Nations Evaluation Group (UNEG), [13] which works to strengthen the function and to establish UN norms and standards for evaluation. There is also an evaluation group within the OECD-DAC, which endeavors to improve development evaluation standards. [15] The independent evaluation units of the major multilateral development banks (MDBs) have also created the Evaluation Cooperation Group [16] to strengthen the use of evaluation for greater MDB effectiveness and accountability, to share lessons from MDB evaluations, and to promote evaluation harmonization and collaboration.

Perspectives

The word "evaluation" has various connotations for different people, raising issues related to this process that include; what type of evaluation should be conducted; why there should be an evaluation process and how the evaluation is integrated into a program, for the purpose of gaining greater knowledge and awareness? There are also various factors inherent in the evaluation process, for example; to critically examine influences within a program that involve the gathering and analyzing of relative information about a program.

Michael Quinn Patton advanced the idea that the evaluation procedure should be directed towards:

From another perspective on evaluation, offered by Thomson and Hoffman in 2003, a situation may be encountered in which the process is not considered advisable, for instance when a program is unpredictable or unsound. This would include the program lacking a consistent routine, the concerned parties being unable to reach an agreement regarding the purpose of the program, or an influencer or manager refusing to incorporate relevant, important central issues within the evaluation.

Approaches

There exist several conceptually distinct ways of thinking about, designing, and conducting evaluation efforts. Many of the evaluation approaches in use today make truly unique contributions to solving important problems, while others refine existing approaches in some way.

Classification of approaches

Two classifications of evaluation approaches by House [17] and Stufflebeam and Webster [18] can be combined into a manageable number of approaches in terms of their unique and important underlying principles.[clarification needed]

House considers all major evaluation approaches to be based on a common ideology entitled liberal democracy. Important principles of this ideology include freedom of choice, the uniqueness of the individual and empirical inquiry grounded in objectivity. He also contends that they are all based on subjectivist ethics, in which ethical conduct is based on the subjective or intuitive experience of an individual or group. One form of subjectivist ethics is utilitarian, in which "the good" is determined by what maximizes a single, explicit interpretation of happiness for society as a whole. Another form of subjectivist ethics is intuitionist/pluralist, in which no single interpretation of "the good" is assumed and such interpretations need not be explicitly stated nor justified.

These ethical positions have corresponding epistemologies, that is, philosophies for obtaining knowledge. The objectivist epistemology is associated with the utilitarian ethic; in general, it is used to acquire knowledge that can be externally verified (intersubjective agreement) through publicly exposed methods and data. The subjectivist epistemology is associated with the intuitionist/pluralist ethic and is used to acquire new knowledge based on existing personal knowledge, as well as experiences that are (explicit) or are not (tacit) available for public inspection. House then divides each epistemological approach into two main political perspectives. Approaches can take an elite perspective, focusing on the interests of managers and professionals, or they can take a mass perspective, focusing on consumers and participatory approaches.

Stufflebeam and Webster place approaches into one of three groups, according to their orientation toward the role of values and ethical consideration. The political orientation promotes a positive or negative view of an object regardless of what its value actually is and might be; they call this pseudo-evaluation. The questions orientation includes approaches that might or might not provide answers specifically related to the value of an object; they call this quasi-evaluation. The values orientation includes approaches primarily intended to determine the value of an object; they call this true evaluation.

When the above concepts are considered simultaneously, fifteen evaluation approaches can be identified in terms of epistemology, major perspective (from House), and orientation. [18] Two pseudo-evaluation approaches, politically controlled and public relations studies, are represented. They are based on an objectivist epistemology from an elite perspective. Six quasi-evaluation approaches use an objectivist epistemology. Five of them—experimental research, management information systems, testing programs, objectives-based studies, and content analysis—take an elite perspective. Accountability takes a mass perspective. Seven true evaluation approaches are included. Two approaches, decision-oriented and policy studies, are based on an objectivist epistemology from an elite perspective. Consumer-oriented studies are based on an objectivist epistemology from a mass perspective. Two approaches—accreditation/certification and connoisseur studies—are based on a subjectivist epistemology from an elite perspective. Finally, adversary and client-centered studies are based on a subjectivist epistemology from a mass perspective.
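
To make the combinatorial structure of this classification concrete, the following sketch (in Python, not drawn from the cited sources) records each of the fifteen approaches together with its epistemology, perspective, and orientation, and groups them along any one axis. The data simply restates the paragraph above; the Approach type, the group_by helper, and the variable names are illustrative assumptions, not part of any published tool.

    from collections import namedtuple

    # Each approach from the House / Stufflebeam-Webster synthesis, tagged with
    # the three classification axes described in the text above.
    Approach = namedtuple("Approach", ["name", "epistemology", "perspective", "orientation"])

    APPROACHES = [
        Approach("politically controlled studies", "objectivist", "elite", "pseudo-evaluation"),
        Approach("public relations studies", "objectivist", "elite", "pseudo-evaluation"),
        Approach("experimental research", "objectivist", "elite", "quasi-evaluation"),
        Approach("management information systems", "objectivist", "elite", "quasi-evaluation"),
        Approach("testing programs", "objectivist", "elite", "quasi-evaluation"),
        Approach("objectives-based studies", "objectivist", "elite", "quasi-evaluation"),
        Approach("content analysis", "objectivist", "elite", "quasi-evaluation"),
        Approach("accountability", "objectivist", "mass", "quasi-evaluation"),
        Approach("decision-oriented studies", "objectivist", "elite", "true evaluation"),
        Approach("policy studies", "objectivist", "elite", "true evaluation"),
        Approach("consumer-oriented studies", "objectivist", "mass", "true evaluation"),
        Approach("accreditation/certification", "subjectivist", "elite", "true evaluation"),
        Approach("connoisseur studies", "subjectivist", "elite", "true evaluation"),
        Approach("adversary studies", "subjectivist", "mass", "true evaluation"),
        Approach("client-centered studies", "subjectivist", "mass", "true evaluation"),
    ]

    def group_by(axis):
        """Group approach names by one classification axis, e.g. "orientation"."""
        groups = {}
        for approach in APPROACHES:
            groups.setdefault(getattr(approach, axis), []).append(approach.name)
        return groups

    # Example: list the approaches under each orientation.
    for orientation, names in group_by("orientation").items():
        print(orientation, "->", ", ".join(names))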

Summary of approaches

The following table is used to summarize each approach in terms of four attributes—organizer, purpose, strengths, and weaknesses. The organizer represents the main considerations or cues practitioners use to organize a study. The purpose represents the desired outcome for a study at a very general level. Strengths and weaknesses represent other attributes that should be considered when deciding whether to use the approach for a particular study. The following narrative highlights differences between approaches grouped together.

Summary of approaches for conducting evaluations
Approach | Organizer | Purpose | Key strengths | Key weaknesses
Politically controlled | Threats | Get, keep or increase influence, power or money. | Secures evidence advantageous to the client in a conflict. | Violates the principle of full and frank disclosure.
Public relations | Propaganda needs | Create a positive public image. | Secures evidence most likely to bolster public support. | Violates the principles of balanced reporting, justified conclusions, and objectivity.
Experimental research | Causal relationships | Determine causal relationships between variables. | Strongest paradigm for determining causal relationships. | Requires a controlled setting, limits the range of evidence, focuses primarily on results.
Management information systems | Scientific efficiency | Continuously supply the evidence needed to fund, direct, and control programs. | Gives managers detailed evidence about complex programs. | Human service variables are rarely amenable to the narrow, quantitative definitions needed.
Testing programs | Individual differences | Compare test scores of individuals and groups to selected norms. | Produces valid and reliable evidence in many performance areas; very familiar to the public. | Data usually cover only testee performance, overemphasize test-taking skills, and can be a poor sample of what is taught or expected.
Objectives-based | Objectives | Relate outcomes to objectives. | Common-sense appeal; widely used; uses behavioral objectives and testing technologies. | Leads to terminal evidence often too narrow to provide a basis for judging the value of a program.
Content analysis | Content of a communication | Describe and draw conclusions about a communication. | Allows unobtrusive analysis of large volumes of unstructured, symbolic materials. | Sample may be unrepresentative yet overwhelming in volume; analysis design often overly simplistic for the question.
Accountability | Performance expectations | Provide constituents with an accurate accounting of results. | Popular with constituents; aimed at improving the quality of products and services. | Creates unrest between practitioners and consumers; politics often forces premature studies.
Decision-oriented | Decisions | Provide a knowledge and value base for making and defending decisions. | Encourages use of evaluation to plan and implement needed programs; helps justify decisions about plans and actions. | Necessary collaboration between evaluator and decision-maker provides an opportunity to bias results.
Policy studies | Broad issues | Identify and assess the potential costs and benefits of competing policies. | Provides general direction for broadly focused actions. | Often corrupted or subverted by politically motivated actions of participants.
Consumer-oriented | Generalized needs and values, effects | Judge the relative merits of alternative goods and services. | Independent appraisal to protect practitioners and consumers from shoddy products and services; high public credibility. | Might not help practitioners do a better job; requires credible and competent evaluators.
Accreditation / certification | Standards and guidelines | Determine whether institutions, programs, and personnel should be approved to perform specified functions. | Helps the public make informed decisions about the quality of organizations and the qualifications of personnel. | Standards and guidelines typically emphasize intrinsic criteria to the exclusion of outcome measures.
Connoisseur | Critical guideposts | Critically describe, appraise, and illuminate an object. | Exploits highly developed expertise on the subject of interest; can inspire others to more insightful efforts. | Dependent on a small number of experts, making the evaluation susceptible to subjectivity, bias, and corruption.
Adversary | "Hot" issues | Present the pros and cons of an issue. | Ensures balanced presentation of represented perspectives. | Can discourage cooperation, heighten animosities.
Client-centered | Specific concerns and issues | Foster understanding of activities and how they are valued in a given setting and from a variety of perspectives. | Practitioners are helped to conduct their own evaluation. | Low external credibility; susceptible to bias in favor of participants.
Note. Adapted and condensed primarily from House (1978) and Stufflebeam & Webster (1980). [18]

Pseudo-evaluation

Politically controlled and public relations studies are based on an objectivist epistemology from an elite perspective.[clarification needed] Although both of these approaches seek to misrepresent value interpretations about an object, they function differently from each other. Information obtained through politically controlled studies is released or withheld to meet the special interests of the holder, whereas public relations information creates a positive image of an object regardless of the actual situation. Despite the application of both studies in real scenarios, neither of these approaches is acceptable evaluation practice.

Objectivist, elite, quasi-evaluation

As a group, these five approaches represent a highly respected collection of disciplined inquiry approaches. They are considered quasi-evaluation approaches because particular studies legitimately can focus only on questions of knowledge without addressing any questions of value. Such studies are, by definition, not evaluations. These approaches can produce characterizations without producing appraisals, although specific studies can produce both. Each of these approaches serves its intended purpose well. They are discussed roughly in order of the extent to which they approach the objectivist ideal.

Objectivist, mass, quasi-evaluation

Objectivist, elite, true evaluation

Objectivist, mass, true evaluation

Subjectivist, elite, true evaluation

Subjectivist, mass, true evaluation

Client-centered

Methods and techniques

Evaluation is methodologically diverse. Methods may be qualitative or quantitative, and include case studies, survey research, statistical analysis, model building, and many more.
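
As a minimal illustration of the quantitative end of this spectrum, the sketch below (in Python, with invented data) compares outcomes for a hypothetical program group and comparison group using a difference in means and a Welch t-test. The variable names and figures are assumptions for illustration only, not a recommendation of any particular evaluation design.

    from statistics import mean
    from scipy import stats  # SciPy's two-sample t-test; any equivalent test would do

    # Hypothetical outcome scores; a real evaluation would draw these from
    # program records, surveys, or administrative data.
    program_group = [72, 85, 78, 90, 66, 81, 77]
    comparison_group = [70, 74, 69, 80, 65, 72, 71]

    # Simple effect estimate: difference in mean outcomes between the groups.
    difference = mean(program_group) - mean(comparison_group)

    # Welch's t-test (unequal variances) for whether the difference is
    # distinguishable from chance, given these small illustrative samples.
    t_stat, p_value = stats.ttest_ind(program_group, comparison_group, equal_var=False)

    print(f"Mean difference: {difference:.1f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

Such a comparison is only meaningful when the comparison group provides a credible counterfactual, which is the central concern of impact evaluation designs.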

See also

Related Research Articles

Project management is the process of leading the work of a team to achieve all project goals within the given constraints. This information is usually described in project documentation, created at the beginning of the development process. The primary constraints are scope, time, and budget. The secondary challenge is to optimize the allocation of necessary inputs and apply them to meet pre-defined objectives.

<span class="mw-page-title-main">Risk management</span> Identification, evaluation and control of risks

Risk management is the identification, evaluation, and prioritization of risks followed by coordinated and economical application of resources to minimize, monitor, and control the probability or impact of unfortunate events or to maximize the realization of opportunities.

<span class="mw-page-title-main">Software architecture</span> High level structures of a software system

Software architecture is the set of structures needed to reason about a software system and the discipline of creating such structures and systems. Each structure comprises software elements, relations among them, and properties of both elements and relations.

Program evaluation is a systematic method for collecting, analyzing, and using information to answer questions about projects, policies and programs, particularly about their effectiveness and efficiency.

Policy analysis or public policy analysis is a technique used in the public administration sub-field of political science to enable civil servants, nonprofit organizations, and others to examine and evaluate the available options to implement the goals of laws and elected officials. People who regularly use policy analysis skills and techniques on the job, particularly those who use it as a major part of their job duties are generally known by the title policy analyst. The process is also used in the administration of large organizations with complex policies. It has been defined as the process of "determining which of various policies will achieve a given set of goals in light of the relations between the policies and the goals."

A feasibility study is an assessment of the practicality of a project or system. A feasibility study aims to objectively and rationally uncover the strengths and weaknesses of an existing business or proposed venture, opportunities and threats present in the natural environment, the resources required to carry through, and ultimately the prospects for success. In its simplest terms, the two criteria to judge feasibility are cost required and value to be attained.

Business analysis is a professional discipline focused on identifying business needs and determining solutions to business problems. Solutions may include a software-systems development component, process improvements, or organizational changes, and may involve extensive analysis, strategic planning and policy development. A person dedicated to carrying out these tasks within an organization is called a business analyst or BA.

The Joint Committee on Standards for Educational Evaluation is an American/Canadian based Standards Developer Organization (SDO). The Joint Committee, created in 1975, represents a coalition of major professional associations formed to develop evaluation standards and improve the quality of standardized evaluation. The Committee has thus far published three sets of standards for evaluations. The Personnel Evaluation Standards was published in 1988 and updated in 2008, The Program Evaluation Standards was published in 1994, and The Student Evaluation Standards was published in 2003.

<span class="mw-page-title-main">Internal audit</span> Independent, objective assurance and consulting activity

Internal auditing is an independent, objective assurance and consulting activity designed to add value and improve an organization's operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk management, control and governance processes. Internal auditing might achieve this goal by providing insight and recommendations based on analyses and assessments of data and business processes. With commitment to integrity and accountability, internal auditing provides value to governing bodies and senior management as an objective source of independent advice. Professionals called internal auditors are employed by organizations to perform the internal auditing activity.

The engineering design process, also known as the engineering method, is a common series of steps that engineers use in creating functional products and processes. The process is highly iterative – parts of the process often need to be repeated many times before another can be entered – though the part(s) that get iterated and the number of such cycles in any given project may vary.

A needs assessment is a systematic process for determining and addressing needs, or "gaps", between current conditions and desired conditions or "wants".

Audio equipment testing is the measurement of audio quality through objective and/or subjective means. The results of such tests are published in journals, magazines, whitepapers, websites, and in other media.

Impact evaluation assesses the changes that can be attributed to a particular intervention, such as a project, program or policy, both the intended ones, as well as ideally the unintended ones. In contrast to outcome monitoring, which examines whether targets have been achieved, impact evaluation is structured to answer the question: how would outcomes such as participants' well-being have changed if the intervention had not been undertaken? This involves counterfactual analysis, that is, "a comparison between what actually happened and what would have happened in the absence of the intervention." Impact evaluations seek to answer cause-and-effect questions. In other words, they look for the changes in outcome that are directly attributable to a program.

Transformative assessment is a form of assessment that uses “institution-wide assessment strategies that are based on institutional goals and implemented in an integrated way for all levels to systematically transform teaching and learning.” Transformative assessment is focused on the quality of the assessment instruments and how well the assessment measures achieving of a goal. "The classic approach is to say, if you want more of something, measure it"

A glossary of terms relating to project management and consulting.

Participatory development (PD) seeks to engage local populations in development projects. Participatory development has taken a variety of forms since it emerged in the 1970s, when it was introduced as an important part of the "basic needs approach" to development. Most manifestations of public participation in development seek "to give the poor a part in initiatives designed for their benefit" in the hopes that development projects will be more sustainable and successful if local populations are engaged in the development process. PD has become an increasingly accepted method of development practice and is employed by a variety of organizations. It is often presented as an alternative to mainstream "top-down" development. There is some question about the proper definition of PD as it varies depending on the perspective applied.

Small-scale project management is the specific type of project management of small-scale projects. These projects are characterised by factors such as short duration; low person hours; small team; size of the budget and the balance between the time committed to delivering the project itself and the time committed to managing the project. They are otherwise unique, time delineated and require the delivery of a final output in the same way as large-scale projects.

Empowerment evaluation (EE) is an evaluation approach designed to help communities monitor and evaluate their own performance. It is used in comprehensive community initiatives as well as small-scale settings and is designed to help groups accomplish their goals. According to David Fetterman, "Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination". An expanded definition is: "Empowerment evaluation is an evaluation approach that aims to increase the likelihood that programs will achieve results by increasing the capacity of program stakeholders to plan, implement, and evaluate their own programs."

Writing center assessment refers to a set of practices used to evaluate writing center spaces. Writing center assessment builds on the larger theories of writing assessment methods and applications by focusing on how those processes can be applied to writing center contexts. In many cases, writing center assessment and any assessment of academic support structures in university settings builds on programmatic assessment principles as well. As a result, writing center assessment can be considered a branch of programmatic assessment, and the methods and approaches used here can be applied to a range of academic support structures, such as digital studio spaces.

Decision quality (DQ) is the quality of a decision at the moment the decision is made, regardless of its outcome. Decision quality concepts permit the assurance of both effectiveness and efficiency in analyzing decision problems. In that sense, decision quality can be seen as an extension to decision analysis. Decision quality also describes the process that leads to a high-quality decision. Properly implemented, the DQ process enables capturing maximum value in uncertain and complex scenarios.

References

  1. Staff (1995–2012). "2. What Is Evaluation?". International Center for Alcohol Policies - Analysis. Balance. Partnership. International Center for Alcohol Policies. Archived from the original on 2012-05-04. Retrieved 13 May 2012.
  2. Sarah del Tufo (13 March 2002). "WHAT is evaluation?". Evaluation Trust. The Evaluation Trust. Archived from the original on 30 April 2012. Retrieved 13 May 2012.
  3. Michael Scriven (1967). "The methodology of evaluation". In Stake, R. E. (ed.). Curriculum evaluation. Chicago: Rand McNally. American Educational Research Association (monograph series on evaluation, no. 1).
  4. Rossi, P.H.; Lipsey, M.W.; Freeman, H.E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks: Sage. ISBN 978-0-7619-0894-4.
  5. Reeve, J.; Peerbhoy, D. (2007). "Evaluating the evaluation: Understanding the utility and limitations of evaluation as a tool for organizational learning". Health Education Journal. 66 (2): 120–131. doi:10.1177/0017896907076750. S2CID 73248087.
  6. Hurteau, M.; Houle, S.; Mongiat, S. (2009). "How Legitimate and Justified are Judgments in Program Evaluation?". Evaluation. 15 (3): 307–319. doi:10.1177/1356389009105883. S2CID 145812003.
  7. Staff (2011). "Evaluation Purpose". designshop – lessons in effective teaching. Learning Technologies at Virginia Tech. Archived from the original on 2012-05-30. Retrieved 13 May 2012.
  8. Alkin; Ellett (1990). not given. p. 454.
  9. Potter, C. (2006). "Psychology and the art of program evaluation". South African Journal of Psychology. 36 (1): 82–102. doi:10.1177/008124630603600106. S2CID 145698028.
  10. David Todd (2007). GEF Evaluation Office Ethical Guidelines (PDF). Washington, DC, United States: Global Environment Facility Evaluation Office. Archived from the original (PDF) on 2012-03-24. Retrieved 2011-11-20.
  11. Staff (2012). "News and Events". Joint Committee on Standards for Educational Evaluation. Joint Committee on Standards for Educational Evaluation. Archived from the original on October 15, 2009. Retrieved 13 May 2012.
  12. Staff (July 2004). "AMERICAN EVALUATION ASSOCIATION GUIDING PRINCIPLES FOR EVALUATORS". American Evaluation Association. American Evaluation Association. Archived from the original on 29 April 2012. Retrieved 13 May 2012.
  13. Staff (2012). "UNEG Home". United Nations Evaluation Group. United Nations Evaluation Group. Archived from the original on 13 May 2012. Retrieved 13 May 2012.
  14. World Bank Institute (2007). "Monitoring & Evaluation for Results Evaluation Ethics What to expect from your evaluators" (PDF). World Bank Institute. The World Bank Group. Archived (PDF) from the original on 1 November 2012. Retrieved 13 May 2012.
  15. Staff. "DAC Network On Development Evaluation". OECD - Better Policies For Better Lives. OECD. Archived from the original on 2 June 2012. Retrieved 13 May 2012.
  16. Staff. "Evaluation Cooperation Group". Evaluation Cooperation Group website. ECG. Archived from the original on 13 June 2006. Retrieved 31 May 2013.
  17. House, E. R. (1978). Assumptions underlying evaluation models. Educational Researcher. 7(3), 4-12.
  18. Stufflebeam, D. L., & Webster, W. J. (1980). "An analysis of alternative approaches to evaluation" Archived 2016-11-09 at the Wayback Machine. Educational Evaluation and Policy Analysis. 2(3), 5-19. OCLC 482457112.