MONITORING AND EVALUATION

ONETrack International’s Transition to Home program maximizes the welfare of orphaned children around the world by placing them in the care of existing family members. Monitoring and evaluation (M&E) is a crucial component of making sure that children receive the care they deserve and that we, as an organization, are having the impact we set out to provide.

MONITORING

Definition of Monitoring

Monitoring is the continuous gathering of quantitative and qualitative information to track the progress of project implementation, using indicators to identify whether the goals of the project are being achieved. This allows the program manager to establish whether the program is moving towards its main objectives. The data collected reveal a project’s strengths and weaknesses, so that limitations arising during implementation can be resolved as the project develops.

 

Types of Monitoring

There are a few types of monitoring methods that can be included in the M&E process:

1. Process Monitoring:

This involves field visits to the location where the project is being implemented. Management, stakeholders and project staff jointly develop a checklist of inputs, outputs and activities to be reviewed during these visits, and M&E staff use the checklist to determine whether the project is on track to attain its goals. Any gaps identified are then flagged to management, stakeholders and project staff to ensure that they are addressed.

2. Assumption Monitoring:

Assumptions are stated to ensure that factors outside the control of the project are taken into account. Their influence should be considered throughout the development of the project, since these assumptions can determine the project’s success or failure. Assumption monitoring tracks these external factors, which may help explain why a project has succeeded or failed. For instance, one of ONETrack International’s long term goals is to ensure that orphans who have transitioned into the care of extended family members have access to education that would eventually lead to tertiary education. This may not happen, however, due to factors such as the interests of the child or family problems that are outside the scope of ONETrack International’s capacity.

3. Financial Monitoring:

Financial monitoring has several uses. The most obvious is to ensure that the project stays within the budget set out during the planning stage. It also keeps the organization financially accountable and maintains transparency to stakeholders regarding the use of funds. Lastly, financial efficiency can be measured to identify which activities are under-resourced or over-resourced, so that resources can be channelled to other activities where they may be needed.
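As a rough illustration, a minimal sketch of such a variance check is shown below. The activity names, budget figures and the 10% flagging threshold are purely hypothetical and are not part of ONETrack International’s actual financial procedures.

```python
# Minimal sketch: flag activities that are over- or under-resourced by
# comparing planned budgets against actual spending (hypothetical figures).

planned = {"school fees": 5000, "medical care": 3000, "family stipends": 4000}
actual = {"school fees": 5400, "medical care": 1800, "family stipends": 3900}

for activity, budget in planned.items():
    spent = actual.get(activity, 0)
    variance_pct = 100 * (spent - budget) / budget  # positive = overspent
    if variance_pct > 10:
        status = "over-resourced"
    elif variance_pct < -10:
        status = "under-resourced"
    else:
        status = "on track"
    print(f"{activity}: planned {budget}, spent {spent}, "
          f"variance {variance_pct:+.1f}% ({status})")
```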

4. Impact Monitoring: 

This type of monitoring involves the continual assessment of the impacts the project produces for its target populations. The path to achieving ONETrack International’s long term goals depends on the systematic attainment of short term goals. Impact monitoring applies primarily to those long term goals, as there is a need to measure whether the intended impacts of the project have been produced and whether it is on the way to achieving its main long term goals. These impacts can be monitored through the established impact indicators. Both positive and negative impacts should be taken into consideration, so that impacts that were not anticipated prior to the project’s initiation are also identified.

To establish the monitoring process, three main elements are used:

  • Project inputs
  • Performance
  • Progress

 

Monitoring Criteria

A widely accepted set of monitoring criteria adopted in the M&E procedures of various agencies is the SMART criteria. SMART is used to set the goals and objectives of the project, and these serve as the principal guide for formulating indicators for effective results-based monitoring. This section sets out the components of these criteria. Because each component can be interpreted and defined differently by organizations according to the context of the project they are implementing, the components stated here represent those that ONETrack International uses in the monitoring of our Transition to Home program:

1. Specific:

The indicator must be formulated to capture the quantitative and qualitative information necessary to reflect whether a specific objective, and not any other objective, has been met. This makes it possible to ascertain whether observed changes are due to the implementation of the project or to something else. Hence, it is important that the objectives of the project are set out clearly at the start. Stakeholders and project managers should also have a common understanding of what these indicators are supposed to capture.

2. Measurable: 

Firstly, the indicator must be feasible to quantify, and the data should be gathered in a pragmatic manner so that the indicator measures changes that can be verified objectively. Secondly, indicators must be formulated in such a way that two different people would measure them in the same way, irrespective of when the data is collected, to ensure consistency across contexts. If qualitative data is being collated for indicator formulation, its meaning must be clearly defined so that the indicator is not open to differing interpretations or misinterpretation. Lastly, the sensitivity of these indicators should be considered, so that they are able to detect changes in program activities that affect results.

3. Achievable and Attainable: 

For indicators to be deemed achievable, the target outcome should accurately specify the component that the indicator measures, and these outcomes must be realistic enough to be achieved. Additionally, it should be possible to attribute any changes measured by the indicator to the implementation of the project itself. Hence, these changes should be anticipated prior to implementation so that they can be identified once the project is under way.

4. Relevant and Realistic: 

The indicators must be realistic in the sense that the way the data is collected is practical, reflects the expectations of the parties involved and uses sources already available to the project manager. If the costs or skills required for data collection are obstacles, the indicators may be inaccurate because the information cannot be obtained at all. This can have a knock-on effect on other criteria in the logical pathway, which may be compromised by the indicator’s inability to measure what it is supposed to measure. If the information an indicator requires is difficult to obtain, the output cannot be measured accurately and we cannot ascertain whether the short term goal has been achieved. This matters because, as mentioned in the previous section, short term goals have to be achieved before long term goals become attainable. It follows that indicators should relate to those long term goals in order to be meaningful and to show that the short term outputs have a related impact.

This leads to the criterion that indicators must be relevant. For an indicator to be relevant, it must establish a relationship between what it measures and the theories that led to the development of the short and long term outcomes. For instance, one short term goal is for children to attend school for a higher number of years, while the corresponding long term goal is to see more of them obtaining better jobs after aging out of the program. ‘Better’ jobs in this case can mean high-skilled labor. A suitable indicator could be the number of years the children have attended school or received education. The theory is that the longer children are in school, the more likely they are to be educated up to the tertiary level, which enables them to fill high-skilled jobs that may require a tertiary degree as a prerequisite.
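As a rough illustration of how such a schooling indicator might be computed from monitoring records, the sketch below uses hypothetical child records and a hypothetical 12-year threshold for being on track to tertiary education.

```python
# Minimal sketch: average years of schooling and the share of children on
# track for tertiary education. All records and thresholds are hypothetical.

children = [
    {"id": "C01", "years_of_school": 9},
    {"id": "C02", "years_of_school": 12},
    {"id": "C03", "years_of_school": 13},
]
TERTIARY_ENTRY_YEARS = 12  # assumed threshold for illustration only

avg_years = sum(c["years_of_school"] for c in children) / len(children)
on_track = sum(1 for c in children
               if c["years_of_school"] >= TERTIARY_ENTRY_YEARS) / len(children)

print(f"Average years of schooling: {avg_years:.1f}")
print(f"Share at or above the assumed tertiary-entry threshold: {on_track:.0%}")
```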

5. Time Bound: 

The time factor needs to be considered in the formulation of indicators because it can become a constraint. Such considerations include, but are not limited to:

  • The time spent on data collection, and the efficient use of the resources available for it, must be reflected in these indicators.
  • The time lag between the actual output and the expected output needs to be considered.
  • A time frame should indicate when the indicators will measure the output.

 

EVALUATION

Definition of Evaluation

According to UNICEF, evaluation can be defined as a process used to determine, in a systematic and objective way, the relevance, effectiveness, efficiency and impact of actions taken to attain specific goals. It is therefore a tool for improving both current activities and future planning, programming and decision-making.

 

Evaluation Criteria

Based on this definition of evaluation, we can evaluate the success of the project by first stating the criteria used to structure the evaluation. Criteria that can guide the appraisal of the program include, but are not limited to:

  • Relevance: The value of this program should be evaluated with regard to whether it addresses the needs of the stakeholders (e.g. the orphans involved and the organizations implementing the program) and whether the UN Millennium Development Goals and relevant UN conventions have been upheld during the implementation of the program. For instance, for an organization that has adopted the program, are the needs of the orphans in line with the values of the organization after implementing the program, and are these in line with international and local regulatory requirements regarding child adoption laws and child rights?
  • Efficiency: Has the program achieved the intended results while utilizing resources available in the most economical way possible?
  • Effectiveness: Was the program output satisfactory as compared to the expectations set?
  • Impact: The various outcomes, both negative and positive, should be analyzed. The economic, social and political effects on individuals, communities and societies, as well as the impact at the national level, should also be taken into consideration.
  • Sustainability: What is the likelihood of this program continuing its operations independently in the various organizations once support from ONETrack International has been withdrawn? Can this likelihood be measured? Additionally, would this program be emulated in other organizations in the region?

 

Evaluation Report

An evaluation report should contain the following:

  • Findings: What are the facts (quantitative and qualitative) that could be ascertained from the outcome of this program?
  • Conclusions: A general statement that relates to the findings section. Causal relationships can be stated here as well. 
  • Recommendations: Suggestions on how to improve processes in the future; to be thorough and specific, they should also include suggestions for improvement in particular situations with specific contexts and circumstances.
  • Lessons learned: Lessons drawn from the conclusions, generally stated so as to capture lessons with a wider impact on the community, society, nation, region and perhaps even the international community.

 

Types of Evaluation

There are two broad categories of evaluation which are elaborated below: formative and summative evaluation. 

1. Formative Evaluation:

Formative evaluation is similar to monitoring in that it is carried out during the program’s development in order to establish the direction of the program and identify methods that can be improved to achieve the program’s outcomes. One difficulty with this kind of evaluation is transforming its findings into innovative solutions and, consequently, improvements to the program. Two components comprise formative evaluation, and they are elaborated below:

  • Needs Assessment:

A needs assessment is used to determine the needs of the organization by identifying any gaps that hinder the program from achieving its goal. These gaps include, but are not limited to, gaps in knowledge, practices or skills. The expectations of the organization and the program’s intended outcomes should first be laid out and aligned.

  • Process Evaluation:

Process evaluation is used to determine and measure the direct outputs of the program (e.g. outreach, the number of children successfully enrolled in school) and whether the intended outcomes were achieved. These findings can be used to identify improvements to the implementation process. Such evaluation can be conducted throughout the lifetime of the program.

2. Summative Evaluation

Summative evaluation encompasses two types of evaluation: outcome evaluation and impact evaluation. These evaluations should be conducted once the program has been implemented, and they are useful for knowing whether the goals of the program are being achieved. Based on the program’s achievements while it is still running, one can decide whether it should continue to expand or be discontinued; the results of these evaluations can be used to justify that decision:

  • Outcome Evaluation:

Outcome evaluation measures the effects of the program on the target population by assessing how much progress the program’s implementation has made toward the problems it seeks to address. The outcomes to be measured should be the short term and medium term changes that result from the implementation of the program, for example changes in attitudes towards child institutionalization. The outcome component of the logic model can be used to design this part of the evaluation.

  • Impact Evaluation:

Impact evaluation measures how the program affects long term outcomes, whether these effects are intended or not. These impacts should encompass the overall effects on the community, organization, society and environment. To conduct an impact evaluation, there is a need to establish the situation before the program was implemented, for example by comparing outcomes for a group of children where the program was not implemented against a group of children where it was. Ultimately, impact evaluation serves the objectives of lesson-learning and accountability.
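The sketch below illustrates, in simplified form, the with/without comparison described above, using hypothetical enrolment outcomes for a program group and a comparison group. A real impact evaluation would also require careful group selection and far more data.

```python
# Minimal sketch: compare the same outcome for children where the program was
# implemented and for a comparison group where it was not (hypothetical data).

program_group = [1, 1, 0, 1, 1, 1]      # 1 = child enrolled in school, 0 = not
comparison_group = [0, 1, 0, 0, 1, 0]

def rate(group):
    """Share of children in the group with the outcome of interest."""
    return sum(group) / len(group)

estimated_impact = rate(program_group) - rate(comparison_group)

print(f"Program group enrolment rate:    {rate(program_group):.0%}")
print(f"Comparison group enrolment rate: {rate(comparison_group):.0%}")
print(f"Estimated impact (difference):   {estimated_impact:+.0%}")
```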

 

Purpose of Evaluation

Evaluation is usually conducted towards the end of the program, or at the midpoint of the program’s progress. Evaluation can be used to:

  • Improve the efficiency and effectiveness of the strategies outlined in the program.
  • Make the coordination between different target groups transparent, in order to ensure that the tasks of these groups are completed and fit the overall agenda for accomplishing the goals laid out.
  • Make accountability possible with regard to each target group’s contribution.
  • Provide information on whether the strategies outlined are sustainable in the long run and, therefore, whether the program can be sustained in the future.

For these objectives to be realized, it is important to keep stakeholders well informed so they can make appropriate choices, to work closely with partners, to verify that short and long term commitments and goals have been met, to honor your team’s work, and to show relevant stakeholders within the community why they should remain invested in the vision and goals of this program.

SCOPE AND PURPOSE OF M&E

Review of Logic Model

A logic model is developed at the beginning of the implementation of ONETrack International’s Transition to Home program. It is designed to ensure that the expectations and objectives of the program are aligned across stakeholders and program managers.

 

Identify Expectations

The expectations and needs of internal and external stakeholders should be ascertained to ensure that understanding, ownership and the use of the data collected are aligned across the board. A clear understanding of stakeholders’ priorities, and of how they are affected by the program’s implementation, is therefore essential. Their constraints should also be clarified so that the stated goals address both these priorities and constraints. The objectives should be adapted to the local context so that the goals are feasible and the program is credible and accepted in the respective communities.

Examples of key stakeholders and informational needs:

  • Communities
  • Beneficiaries
  • Donors
  • Project/program management
  • Partners (bilateral or local)
  • Government and local authorities 

 

Scope of major M&E Events and Functions

The scope of major M&E events and functions refers to the scale and complexity of the monitoring and evaluation of the program. The M&E process can range from a relatively simple one that relies on internal resources and capacities to a highly complex one with diverse activities requiring external expertise and resources. Factors that affect the complexity of the M&E process include (but are not limited to) the following:

  • Number of outcomes 
  • Type of outcomes
  • Scale of the intended impact
  • Geographic scale (i.e. accessibility to the program areas)
  • Demographic scale (i.e. specific target populations and their accessibility)
  • Time frame
  • Available human resources and budget.

To determine the overall complexity of your M&E system, identify the key activities you would like to carry out throughout the entire process, to give an overview of the system, along with any additional components you would like to include along the way, such as further plans for funding or additional technical expertise required.

 

Monitoring and Development Process

There are twelve main components of the monitoring and evaluation system that are required for it to function efficiently and effectively:

  • Organizational structures with M&E functions
  • Human capacity for M&E
  • Partnerships to plan, coordinate and manage the M&E system
  • M&E frameworks
  • M&E workplan and costs
  • Advocacy, communication and culture for M&E
  • Routine program monitoring
  • Surveys and surveillance
  • National and sub-national databases
  • Supportive supervision and data auditing
  • Evaluation and research
  • Data dissemination and use

 

REFERENCE MODELS

Logic Model

The logic model was developed at the beginning of the implementation of ONETrack International’s Transition to Home program so that all stakeholders, partners and program managers can understand the goals, expectations and processes of the program and the way it is conducted throughout its lifecycle. It is thus a reference point for those involved in the program’s implementation. The model shows the processes involved in the specific pathway taken during the program to achieve its objectives. It also helps identify issues in the program so they can be addressed, and items that are no longer relevant can be omitted as the program progresses towards its objectives. The dynamic relationship between resources, activities and goals is then reflected in the model.

The components of the model include the inputs, outputs, expected outcomes, assumptions, means of verification, indicators and other external factors involved during the lifecycle of the program, along with the rationales and purposes of the program. These are discussed in greater detail below. A description of the main problem at hand and the program’s target audience should also be included. It should be noted that the logic model should be amended as the program progresses.

1. Inputs

It is essential to include the resources required for the development of the program so that they can be budgeted before its commencement. These resources can include, but are not limited to, contributions, staff and investments that need to go into the program. Some resources may already be available to the organization, while others are yet to be acquired. The latter can be categorized into a ‘wish list’ and can include intangible items such as connections that need to be built with the relevant people in the community, region and country.

2. Outputs

The outputs produced by the implementation of the program include the activities, services, events and products that reach the program’s target beneficiaries. Outputs list the end products of the organization’s program and represent a sign of progress towards the program’s goals. Unlike outcomes, outputs are not used to measure the impact or value of the program to the beneficiaries. With regard to ONETrack International’s Transition to Home program, outputs can include the number of children who have transitioned from the care of orphanages into the care of their families and/or relatives. They can also include the number of children who have received medical care, education, etc.

3. Outcomes

The outcomes of the program represent the benefits received by the beneficiaries as a result of the program’s outputs, and they serve as indicators of the success of the processes the program has adopted. It is essential to differentiate between the output and outcome components of the logic model: outputs focus on how the goals of the program are being achieved, while outcomes focus on what the program is trying to achieve. In the case of the Transition to Home program, the output of more children transitioning from orphanages into the care of their families and relatives results in the short term outcome of a lower population of children inside orphanages.

Outcomes can be categorized into short, mid and long term goals. The inputs and outputs described should systematically lead to the achievement of the short term and mid term outcomes, while the long term outcomes should ultimately lead to the solution of the overall problem. To determine whether the long term outcomes have led to a meaningfully better situation than before the program was implemented, a baseline should be established in the targeted areas before the program commences, e.g. how many children in orphanages had access to clean drinking water before the program’s implementation.
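As a simple illustration of such a baseline comparison, the sketch below contrasts a hypothetical baseline measurement with a later follow-up measurement of the same outcome; the figures are invented for illustration only.

```python
# Minimal sketch: compare a follow-up measurement against the baseline
# established before the program commenced (hypothetical values).

baseline = {"children with access to clean drinking water": 0.45}
follow_up = {"children with access to clean drinking water": 0.72}

for measure, before in baseline.items():
    after = follow_up[measure]
    print(f"{measure}: baseline {before:.0%}, follow-up {after:.0%}, "
          f"change {after - before:+.0%}")
```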

4. Assumptions

The aim of stating assumptions is to ensure that the relevant conditions are already in place before rolling out the program. Assumptions are usually identified before the inputs are listed, and they are important because they allow factors outside the control of the program to be identified if the program fails.

The assumptions made must meet the following conditions:

  • They are outside of the project’s control, and they must exist or take place for the program to be successful.
  • They may concern the actions of certain groups or project stakeholders.
  • They may concern economic or social conditions, such as the absence of conflict.
  • They may concern political conditions, such as stability.
  • They may concern conditions of climate.

5. Indicators

Indicators are the data collected to serve as evidence of the effectiveness of the program’s outcomes, by determining whether the performance standards set for the program have been reached. They can measure intangible changes in situations or groups as well as tangible products produced by the program’s activities. The data collected has to be compared to baseline data to measure the changes brought about by the implementation of the program (a short illustrative sketch follows the list below). Indicators need to satisfy a few criteria:

  • Independent: they measure only the objective, purpose or result to which they are linked 
  • Factual: they are based on factual measurement
  • Plausible: it must be believable that they are measuring the change attributed to the project  
  • Objectively verifiable: we can verify whether they have been achieved
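As a rough illustration, the sketch below shows one possible way an indicator could be recorded with its baseline, target and latest measurement; the field names and figures are hypothetical and not a prescribed ONETrack International format.

```python
# Minimal sketch of an indicator record with a baseline, a target and the
# latest measurement, plus a simple progress calculation (hypothetical data).

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    means_of_verification: str   # e.g. school records, field-visit checklist
    baseline: float
    target: float
    latest: float

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered so far."""
        return (self.latest - self.baseline) / (self.target - self.baseline)

enrolment = Indicator(
    name="children enrolled in school",
    means_of_verification="school enrolment records",
    baseline=40, target=100, latest=70,
)
print(f"{enrolment.name}: {enrolment.progress():.0%} of the way to target")
```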

 

Theory of Change

The Theory of Change (ToC) represents the various pathways that can be taken to address the causes of the problems outlined (e.g. institutionalized children), identify solutions and guide the program manager in making decisions as the program progresses towards the objectives of the Transition to Home program. These can include pathways that are not directly related to the program. The ToC is meant to be a flexible instrument, as opposed to the more rigid logical framework, used to plan and monitor the measures taken during the program while taking into account the uncertainty of specific contexts. It also takes into account underlying assumptions and risks, which can be revisited as the program unfolds to ensure that it remains on the right path towards its desired goals. Its function is nevertheless similar to that of the logic model in the sense that both provide accountability and transparency to the stakeholders involved.

1. Methodology

In order to develop the ToC, its methodology should be outlined to understand the components required to create it. This includes:

  • Desired Change: 

The change that the program aims to achieve should represent a convergence of the relationships, conditions and results that the program manager would like to bring about through the program’s actions in the current and/or future contexts. The kind of change we would like to see in the orphan sector influences the specific dimensions the program manager focuses on when implementing the program, i.e. the temporal, relational, structural, geographic, social, cultural, economic, political and institutional dimensions. The Transition to Home program is therefore likely to focus on the social, cultural and institutional aspects of society, ensuring that the culture of the community is retained by transitioning orphans into the care of their families. It also addresses the social aspect by ensuring that these orphans have their health care and educational needs met.

  • Underlying Assumptions: 

Assumptions should connect the outcomes with the conditions the program will depend on; these conditions have to be identified prior to implementation, along with the risks involved. In other words, the assumptions laid out should be relevant to the achievement of the outcomes while accounting for factors that are out of our control. These assumptions should also explain why the outcomes will lead to the desired change we want from this program. The flexibility of the ToC means that we are able to revisit these assumptions and modify them when they are no longer realistic or reasonable for achieving our targets. If the assumptions are not clearly defined, it is difficult to determine which factors the success of the program relies on.

  • Change Indicators: 

The indicators of change in the ToC framework are not the same as the output indicators used in the logical framework. Indicators of change should be formulated so that they allow us to understand the context at a given moment and the effects our interventions can have in that context. In this way, the indicators put us in a position to understand how the changes we have made are, or are not, happening and what our contribution to that change has been. Each outcome at every subsequent level therefore has its own indicators of change. The relative importance of the outcomes’ effects allows us to prioritize the indicators of change accordingly, in order to track and monitor the relevance of these outcomes. The aim of these indicators is to ensure that the actions we implement produce the desired effects, so that we know the interventions are not being carried out simply for the sake of implementing them. When constructing these indicators, we should consider the degree to which our actions are contributing to the outcomes and ultimate goal.

 

DEVELOPING INDICATORS

Indicators measure the impacts, outcomes, outputs and inputs of the project in order to assess the progress of the program, and compare the outcomes of the program with a baseline so that its success can be ascertained. They make it possible to establish the relationships between these factors and to identify gaps in the project’s progress that may hinder the attainment of its objectives.

Core Indicators:

Core indicators are used to understand the inputs and outcomes of a program. 

Quantitative indicators:

Quantitative indicators measure the outputs of the program that can be easily defined, documented, counted and compared. They comprise numbers and facts that can be verified.

  • Output indicators: Output indicators measure whether the planned activities of the program have been carried out and add more detail about the program. However, they mainly tell us whether the program’s plan has been executed; they cannot measure the effects of those outputs. They can also fall under process indicators, which measure the program’s activities and outputs.
  • Outcome indicators: These indicators measure the results, or impact, of the program. They are useful in determining the reasons for implementing the program in the first place and why certain actions were, or were not, conducted. A brief sketch after this list contrasts output and outcome indicators.
  • Input indicators: Although it may seem obvious that inputs should be tracked, it is important to determine whether the inputs used are also used at the right time.
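The sketch below contrasts an output indicator with an outcome indicator using a handful of hypothetical monitoring records; the field names are illustrative only and do not represent ONETrack International’s actual data.

```python
# Minimal sketch: an output indicator (what the program delivered) versus an
# outcome indicator (what changed for the children), from hypothetical records.

records = [
    {"child": "C01", "enrolled": True,  "still_in_school_after_1yr": True},
    {"child": "C02", "enrolled": True,  "still_in_school_after_1yr": False},
    {"child": "C03", "enrolled": False, "still_in_school_after_1yr": False},
    {"child": "C04", "enrolled": True,  "still_in_school_after_1yr": True},
]

# Output indicator: number of children the program enrolled in school.
enrolled = [r for r in records if r["enrolled"]]
print(f"Output indicator - children enrolled: {len(enrolled)}")

# Outcome indicator: share of enrolled children still in school a year later.
retained = sum(1 for r in enrolled if r["still_in_school_after_1yr"]) / len(enrolled)
print(f"Outcome indicator - one-year retention among enrolled: {retained:.0%}")
```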

Qualitative indicators:

These indicators are useful in assessing aspects of the program’s success that are not easily quantifiable. Factors that cannot easily be counted are used to answer the ‘whys’ of different situations and contexts. This is valuable because the implementation of the program involves changing children’s lives, and it may be insufficient to capture these changes by numerical means alone. Qualitative indicators are especially useful in measuring impact and evaluating the long term effects of the program from the perspectives of different groups and age groups, e.g. evaluating children’s rights with respect to empowerment and development, such as the future prospects children feel they have when given the opportunity to further their education. Additionally, qualitative indicators are significant in identifying constraints on the implementation of the program and hindrances to its success that may not be apparent initially.

 

DATA COLLECTION

Data collection is required to conduct the monitoring and evaluation process. When collecting data, however, conflicts of interest or ethical considerations may arise and have to be taken into account.

When implementing these data collection methods, the following questions should be posed:

  • Would the data collection methods you are using (e.g. qualitative or quantitative tools or both) be sufficient to capture the diverse opinions of all participants regardless of their comfort levels with sharing personal information?
  • As the participants would be children, have the literacy and level of comprehension been considered in the design of your collection methods? You may want to consider alternative methods or a combination of them when collecting the data to address these possible obstacles. 
  • Is there a need to consider how to encourage the children/host families to express their insights especially in a context where it may not be conventional or comfortable for them to do so? And if so, is there a possibility that you would need to pose the same question in a different way? 
  • Have you considered if the questions you would be asking have been modified to suit the specific group of people you are asking?

Questions to consider when deciding how the data should be utilized:

  • How will the data derived from the M&E plan be used to update internal and external stakeholders of the progress of the program?
  • How will this data be used to assist internal stakeholders to tweak existing processes to ensure that the program is on course to being successful?
  • Ultimately, how would this data be useful in pushing the periphery of existing similar programs and therefore achieving the intended goals?

 

References

https://www.newstatesman.com/world/2017/03/world-must-wake-dangers-orphanages

https://www.globalgiving.org/pfil/10203/projdoc.pdf

https://www.thestar.com.my/news/world/2006/02/18/orphanages-stunt-growth-foster-care-better–study/

http://www.un.org/documents/ga/res/41/a41r085.htm

http://www.unaids.org/sites/default/files/sub_landing/files/2_MERG_Strengthening_Tool_12_Components_ME_System.pdf

https://evaluateblog.wordpress.com/2013/05/03/the-12-key-components-of-me-systems/

https://www.unicef.org/violencestudy/pdf/CP%20Manual%20-%20Stage%206.pdf

https://evaluateblog.wordpress.com/2013/07/02/types-of-monitoring-in-monitoring-and-evaluation-me/

http://www.mnestudies.com/monitoring/what-monitoring

https://www.linkedin.com/pulse/20141022071803-18927814-a-good-start-with-s-m-a-r-t-indicators/

https://www.unitar.org/sites/default/files/uploads/pprs/monitoring-and-evaluation_revised_april_2017.pdf

http://library.cphs.chula.ac.th/Ebooks/ReproductiveHealth/A%20UNICEF%20Guide%20for%20Monitoring%20and%20Evaluation_Making%20a%20Difference.pdf

https://www.unicef.org/evaluation/files/ME_PPP_Manual_2005_013006.pdf

https://cyfar.org/different-types-evaluation

https://www.cdc.gov/healthcommunication/pdf/evaluationplanning.pdf

https://aceproject.org/ace-en/topics/ve/veh/veh02

https://www.cdc.gov/std/program/pupestd/types%20of%20evaluation.pdf

https://blog.socialcops.com/academy/7-types-of-evaluation/

http://www.oecd.org/dac/evaluation/dcdndep/37671602.pdf

http://masstapp.edc.org/prevention-planning/step-5-evaluation

http://eeas.europa.eu/archives/delegations/ethiopia/documents/eu_ethiopia/ressources/m_e_manual_en.pdf

https://undg.org/wp-content/uploads/2017/06/UNDG-UNDAF-Companion-Pieces-7-Theory-of-Change.pdf

http://www.democraticdialoguenetwork.org/app/documents/view/en/1811

http://siteresources.worldbank.org/BRAZILINPOREXTN/Resources/3817166-1185895645304/4044168-1186409169154/24pub_br217.pdf

http://www.infodev.org/infodev-files/resource/InfodevDocuments_286.pdf

http://www.genderevaluation.net/gem/en/gem_tool/step4a2.htm

http://www.emro.who.int/child-health/research-and-evaluation/indicators/All-Pages.html

https://www.cdc.gov/eval/indicators/index.htm

https://www.intrac.org/wpcms/wp-content/uploads/2016/06/Monitoring-and-Evaluation-Series-Programme-Indicators-9.pdf

https://www.devex.com/news/indicators-logframe-and-m-and-e-system-78031

https://pacificwomen.org/wp-content/uploads/2018/03/Monitoring-and-Evaluation-Toolkit-November-2017.pdf

https://www.thecompassforsbc.org/how-to-guides/how-develop-monitoring-and-evaluation-plan

 
