We know from the research, and our own experience, that investing in monitoring and evaluation is a smart strategy for organisations attempting to solve long-term wicked problems and complex organisational culture issues. It makes sense for organisations spending millions of dollars on change management initiatives to know how much bang for their buck they are actually getting.
Monitoring and evaluation (M&E) is the holy grail for government departments and organisations that are heavily focussed on monitoring performance or achieving outcomes. Organisations that strive to deliver the intended outcomes of a program or strategy have a much better chance of success if they invest regularly in monitoring, complemented by periodic evaluation.
Monitoring is the systematic collection of data over the life cycle of a program, project or strategy. It keeps a check on the progress of all the activities being implemented and records whether or not they are on track towards meeting the goals and objectives of the program or strategy. Evaluation, on the other hand, measures the impact of those activities on the intended beneficiaries or recipients. In essence, M&E is a robust process of collecting data and measuring the performance of a project, strategy or program over its life course.
Most programs or strategies fail to achieve their intended outcomes because of a lack of thinking and planning. An essential first step for any organisation, therefore, is to develop a monitoring and evaluation framework or plan, ensuring that any new program or strategy is regularly examined over its lifespan. It is important to determine what will be monitored on an ongoing basis, what will be evaluated periodically over time, what activities will be implemented and how often, and who takes responsibility for consciously monitoring and planning for evaluations.
Successful monitoring and evaluation frameworks are underpinned by a theory of change or program logic. This guides the key evaluation questions, as well as the performance indicators, by mapping out a step-by-step performance story that illustrates the causal relationship between program interventions and intended results.
One of the common problems organisations struggle with when designing M&E frameworks is failing to set time-bound priorities and not knowing the right methods and processes to measure intended results. The Rainbow Framework can help organisations structure their M&E planning (that is, how to manage, define, frame, describe, understand causes, synthesise, and report and support use) and offers a menu of methods and processes for designing a ‘fit-for-purpose’ M&E framework.
Here at Rapid Context, designing a ‘fit-for-purpose’ Monitoring and Evaluation framework is not new to us. Over the years, we have built a strong track record in developing and implementing the rigorous research design underpinning culture change M&E frameworks for complex organisations. If your organisation is implementing a program or strategy, and you are grappling with ‘what to measure’ and ‘how to manage’ it better, get in touch to speak with our experts on Monitoring and Evaluation.
Priya Chattier is a senior research consultant with Rapid Context. She has expertise in qualitative research design, mixed methods, survey design, applied social research in gender, and monitoring and evaluation. She also has extensive research experience on gender issues in the South Pacific region, including Fiji, Solomon Islands, Tonga, and Papua New Guinea, with field experience in diverse and marginalised communities on remote islands.