
Tuesday, May 5, 2020

Monitoring and Evaluation free essay sample

Question: Write an essay on monitoring and evaluation (M&E) in which you:
- describe the difference between monitoring and evaluation
- distinguish between participatory M&E and conventional M&E
- describe how participatory monitoring and evaluation (PME) is used in practice, with specific reference to case studies from China

Title: Monitoring and Evaluation (M&E)

TABLE OF CONTENTS
1 Introduction
2 The difference between monitoring and evaluation
3 Distinguishing Participatory Monitoring and Evaluation from conventional Monitoring and Evaluation
4 Participatory Monitoring and Evaluation (PME) in practice
5 Conclusion
6 Bibliography
7 List of Sources

MONITORING AND EVALUATION (M&E)

1 Introduction

How do we know whether a project has achieved efficiency, been appropriate to its stakeholders and provided adequate benefit to as many stakeholders as possible? Has the project produced unwanted or beneficial side effects? Has new knowledge been generated, and what lessons can we take from the project? Is the project, or elements of it, replicable? This essay explores Monitoring and Evaluation (M&E) as part of the project cycle. To provide context, it defines monitoring and evaluation and distinguishes between them. It then provides a critical discussion of the difference between Participatory Monitoring and Evaluation (PME) and conventional Monitoring and Evaluation (M&E). Finally, it focuses on Participatory Monitoring and Evaluation and how it is used in practice, with specific reference to a case study from China by Jianchu, Qui and Vernooy.

2 The difference between monitoring and evaluation

Let us first define monitoring and evaluation in order to establish the context. Cloete and Rabie (2009:2) choose the following definition of evaluation by Mark, Greene and Shaw: "Evaluation literally means 'to work out the value (of something)' in its Latin root 'valere'." Applied to the project cycle, evaluation "looks at the bigger picture to make judgments about the worth of the entire project, within context" (Cornwell, Modiga, Mokgupi, Plaatjie, Rakolojane, Stewart and Treurnicht 2009:83). Evaluation answers the question: "Was our plan a good one?" (Cornwell et al 2006:83). Monitoring, on the other hand, is concerned more with the ongoing assessment of the project during implementation. According to Cornwell et al (2006:83), "Monitoring focuses on whether things are happening on time, within budget, and to standard. In essence, monitoring asks the question, 'Did we follow our project design?'"

Extrapolating from the text and from this writer's project management experience, and applying the results-based logic model, monitoring is concerned with the assessment of inputs, activities and outputs, whilst evaluation assesses outcomes and impacts. A key difference between monitoring and evaluation is thus the focus: monitoring typically focuses on project implementation activities, which take place in the short to medium term, whilst evaluation typically focuses on the 'big picture' level of the medium to long term. A further difference lies in who typically uses the data each produces. In this writer's experience, the project manager, project team and implementing organisation are chiefly interested in the data produced by monitoring, whereas the data produced by evaluation is of particular interest to donors. This information is also of interest to project stakeholders.
Whilst it is of interest to the project manager, project team and implementing organisation as a whole, evaluations have traditionally been written with the donor as the target audience.

3 Distinguishing Participatory Monitoring and Evaluation from conventional Monitoring and Evaluation

Unlike the "conventional monitoring and evaluation system that works the best as a reporting system in the public sector" (Khan 2006:1), Participatory Monitoring and Evaluation is geared towards involving the project stakeholders in the monitoring and evaluation of the project. It would, however, be an oversimplification to say that Participatory Monitoring and Evaluation is always preferable or superior to conventional Monitoring and Evaluation. Cloete and Rabie (2009:7) propose a classification system for monitoring and evaluation approaches which uses three main classification categories, namely scope, philosophy and design. Such a classification system allows the M&E practitioner to select an appropriate type of M&E depending on the project situation. That said, Participatory Monitoring and Evaluation does have definite merits when compared to conventional Monitoring and Evaluation. Cornwell et al (2009:86) summarize these merits as follows: "Basically, when done properly, participatory evaluation promotes empowerment, confidence, self-esteem and independence."

Let us draw out the key hallmarks of Participatory Monitoring and Evaluation outlined in the literature reviewed. Participatory Monitoring and Evaluation is planned together with the stakeholder group involved in the evaluation, who assist in setting appropriate questions, and it is conducted with stakeholder involvement. Involving project stakeholders in this participatory manner develops the participants' evaluation skills (capacity building) and produces utility beyond a final report: through their exposure to the project, participants become informed stakeholders who, through the interactions established, are empowered to engage better with the project team. Participatory Monitoring and Evaluation also develops a better understanding of the local context than conventional Monitoring and Evaluation does, and it shifts the conventional M&E focus on accountability to the donor towards shared accountability. To summarize, the key difference between Participatory Monitoring and Evaluation and conventional Monitoring and Evaluation is the participatory approach of the former, whereby the evaluation team draws project stakeholders into the mix and the evaluator plays the role of facilitator or team leader.

4 Participatory Monitoring and Evaluation (PME) in practice

Let us now focus on how Participatory Monitoring and Evaluation is used in practice, with specific reference to a case study from China by Jianchu, Qui and Vernooy. Jianchu, Qui and Vernooy's article (2009:388) "examines the capacity-building experiences of two research teams in Yunnan and Guizhou provinces in south-west China who used participatory monitoring and evaluation to strengthen their development research, particularly in the area of natural resource management."
Two research teams of the Community-Based Natural Resource Management (CBNRM) programme of the International Development Research Centre (IDRC) had identified that they wanted to build their capacity in Participatory Monitoring and Evaluation. The two teams were trained simultaneously, and the fieldwork in which they applied the Participatory Monitoring and Evaluation approach took place during the course of the training. The discussion that follows focuses on the aspects of the Participatory Monitoring and Evaluation approach that are replicable in other projects, rather than on the specifics of the research into natural resource management.

The following techniques, paraphrased from the case study of Jianchu et al (2009:390-2), were applied and are listed chronologically:

- Capacity-building training in Participatory Monitoring and Evaluation was offered through three workshops. Research already done was the starting point from which the capacity-building process proceeded; the intention was to add PME to ongoing research efforts. The workshops were offered at the same time as fieldwork was undertaken.
- Participants from both teams attended the workshops together, which allowed for cross-pollination and knowledge sharing.
- The workshops covered content such as: "Key concepts, approach and basic questions related to PME" (Jianchu et al 2009:391); "Exercises that involved identifying and discussing research gaps linked to the six PME questions (why? for whom? what? who? when? how?)" (Jianchu et al 2009:392); and "In small groups, the most important of the identified gaps were debated, and suggestions were made for additional research work" (Jianchu et al 2009:392). Feedback was provided through a 'market' exercise whereby participants shared what they "would 'buy' (i.e. adopt) from each other, and what they would do differently" (Jianchu et al 2009:392).
- True to the ethos of Participatory Monitoring and Evaluation, the workshops were conducted in a participatory way and geared towards creating shared understanding amongst participants. The knowledge gained through the workshops assisted the participants in their PME approach to the fieldwork.
- At the first workshop, each team generated a draft PME plan. The action plan answered the six PME questions (why? for whom? what? who? when? how?).
- The fieldwork between the first and second workshops involved an initial one-day workshop with project stakeholders (in this case farmers from two villages and township officials). By consulting the stakeholders, the research teams were able to update their action plans to include indicators that were more appropriate to the stakeholders.
- At the second workshop, the results of the fieldwork and the updated action plans were presented. Further information was generated by the participants through a process of asking questions, and an improved plan was drawn up.
- For the fieldwork, the following methods and tools were used: "Focus group discussions, key-informant interviews and meetings were held to take feedback and discuss findings" (Jianchu et al 2009:393); "The two PME teams used PRA tools such as resource mapping, focused group discussion, key informant interviewing, and ranking" (Jianchu et al 2009:393). Gender sensitivity was observed during data analysis, and women played an important role in providing information.
- Feedback about the fieldwork results was given to the stakeholders who participated in the fieldwork.
- The teams introduced self-monitoring instruments in four villages, which were progressively improved with the support of the stakeholders from those villages. Self-monitoring booklets, designed to be simple and visual, were distributed to the households involved in the self-monitoring.
- Village meetings were instituted. These were held every three months to assess the situation at the time, collect comments from the self-monitoring process and troubleshoot problems if they arose.
- At the third (final) workshop, the participants assessed the overall experience and looked at the way forward at the project and organisational levels.

Results: The following results were attributed to the participatory evaluation exercise outlined above: "the training and fieldwork in particular contributed greatly to a better understanding by researchers and local government officials of farmers' interests and needs" (Jianchu et al 2009:398), and "The experiences also suggest that strengthening the processes for peer networking, review, and support are powerful means to build capacities" (Jianchu et al 2009:398). Project stakeholders also took more active and empowered involvement in the project after their experience of being involved in the participatory evaluation.

5 Conclusion

This essay has explored Monitoring and Evaluation (M&E) as a means of assessment. It distinguished between monitoring and evaluation to provide context. Thereafter, it provided a critical discussion of the difference between Participatory Monitoring and Evaluation (PME) and conventional Monitoring and Evaluation (M&E). Finally, it focused on Participatory Monitoring and Evaluation and how it is used in practice, with specific reference to a case study from China by Jianchu, Qui and Vernooy.
