Evaluation & Adaptive Learning

Results for Development uses evaluation and adaptive learning to improve the performance of health, education and nutrition programs, contributing to stronger systems.

We conduct targeted research to inform partner strategies, design and implement real-time monitoring systems, and work with partners to design, experiment with and evaluate ways to accelerate their impact.

The Challenge

Funders, policymakers and program managers are constantly looking for ways to design and implement impactful development programs, which can be surprisingly difficult to do. Programs often underperform for a number of reasons, including:

  • A lack of compelling evidence on what works (and why) that can inform the design of new programs or policies;
  • Insufficient feedback during implementation about whether a program or policy works, why or why not, and what can be done to improve it;
  • Uncertainty about how to adapt a promising approach and embed it or scale it up within a system.

Monitoring, evaluation and research methods have the potential to increase impact, but that’s not typically how they have been used. Often, these methods are used for accountability — to determine whether impact was achieved — and by academic researchers who may prioritize different questions than practitioners. Results for Development (R4D) seeks to fill this gap and uses monitoring, evaluation and research methods to answer partner questions about program design and implementation and to promote a culture of real-time learning.

Our Approach

Whether undertaking research to inform a partner’s strategy, designing a new program, or optimizing a successful one already operating at scale, R4D works with partners to generate the evidence and feedback necessary to inform decisions.

We conduct targeted research using methods such as randomized controlled trials (RCTs) to answer questions relevant to donor and practitioner strategies, such as, “Can transparency and accountability interventions improve health outcomes? And why or why not?” We help partners use monitoring systems to get valuable feedback in real time, such as regular tracking of student literacy progress. And we work with partners to design, experiment with and evaluate solutions to accelerate their impact, such as testing different approaches to increase parents’ use of mobile technology to read to their children.

Our engagements involve some or all of the following stages:

  • Understand challenges: Understanding performance challenges and defining impact goals
  • Identify solutions: Reviewing existing evidence and defining a defensible theory of change for the program or intervention
  • Design and experiment: Prototyping, piloting and refining solutions based on ongoing and regular qualitative and quantitative feedback
  • Evaluate: Evaluating the impact of those solutions
  • Monitor: Scaling up, monitoring and identifying areas for improvement based on regular feedback

All of our engagements involve regular Learning Checks, during which we come together with our partners to review findings from the data collected and discuss implications for program implementation. This gives partners the opportunity to reflect on the learning activities, to brainstorm and plan for how the findings can support the refinement of future implementation, and to iterate accordingly. We also look for opportunities to share our learnings with peer programs and the broader development community.

Fostering Digital Communities

We create and support global communities of innovators, funders and policymakers for continuous and iterative learning, knowledge generation, exchange and collaboration.