This blog was written by Emma Gibbs, International Education Consultant for Education Development Trust. She contributes to the design and development of school improvement and education system reform programmes globally. This article summarises the process and outcomes from a write-shop collaboration with DFID advisers at the last UKFIET conference.
Education policymakers are used to seeing results from small-scale pilots. But successful local innovations and interventions, which may have impressive evidence from research trials, do not always translate into results at scale. This is an ongoing frustration for policymakers and, faced with the learning crisis, they are increasingly asking different questions, wondering how to achieve school improvement at scale. To investigate this issue further, Education Development Trust worked with the UK Department for International Development. Here, Emma Gibbs, Senior Consultant, reflects on the experience, and the insights that emerged from the collaboration.
In September 2019, Education Development Trust was invited to facilitate part of a DFID write-shop event for a small group of education advisers. The event, which formed part of the biennial UKFIET conference in Oxford, had a simple goal: to provide the advisers with unencumbered time and space to write those blogs, articles or think-pieces that they had been thinking about for months (or even years) but never quite got around to. Among the attendees, several were interested in writing about scaling – a subject of ongoing debate among many education advisers and other aid professionals. This topic has garnered increasing attention in recent years, as education improvement programmes which show strong results when implemented at a small scale often find that their positive gains are lost once the programme expands to cover a wider system, or when researchers seek to replicate their effects by applying the same model in new countries or contexts.
Indeed, the issue of scaling is one with which Education Development Trust has been engaged for several years, through our experience of designing large-scale system reform programmes, our work on adaptive programming and responsive M&E frameworks, and our support for NGOs and other organisations with their own scale-up processes. Our recent project to support STiR Education’s scale-up in Delhi is one such example.
In planning the write-shop, it became clear that there was a high level of interest in the topic of scaling, and that an array of case studies was emerging. As a result, researchers from EdDevTrust partnered with DFID’s Senior Research Fellows and advisers to collate the ideas and develop an article, which is now published in the international comparative education journal Compare. The research features a range of case studies – from India, Ethiopia, Rwanda, and Nepal, among others – drawn from the real-world reflections of aid professionals attempting to improve education outcomes for as many children as possible (i.e. to deliver at scale). While anecdotal, they come from a place of real-world expertise, and form a crucial part of a currently limited evidence base.
With that in mind, and rather than lamenting the lack of evidence or calling for more research, we have used this collaboration as an opportunity to improve our understanding of what has worked well in these real-life cases. Across all contexts, we see an emerging need for whole-system thinking: in other words, we need to tackle the problem in a way that accounts for the complex regional and national systems within which education programmes operate. On this basis, we have highlighted three key insights that help us to understand how to take education interventions to scale.
- Efforts to scale work better when we work with the existing ‘architecture’
A few of our case studies demonstrate that the success or failure of scaling efforts often relates to the way in which the programme is delivered. More specifically, the extent to which a programme takes existing system structures or ‘architecture’ into account is likely to shape its success or failure at scale. In Nepal, for instance, the spread of clear national policy directions has been inhibited by a lack of capacity in the middle tier of the education system, while in Rwanda, highly centralised policies and processes have created an enabling environment and the capacity for teacher professional development on a national scale. We do not suggest that any particular system architecture is more conducive to successful scaling (the case studies include examples from both highly centralised and decentralised systems), but rather that scaling is more effective when the programme takes system architecture into account – in both its design and implementation.
- Successful scaling requires capacity and leadership to be distributed throughout the system
Programme delivery at scale relies on the people, relationships, knowledge, skills and mindsets that already exist within a system – not on short-term, hired-in expertise or manpower. For scale-up to succeed, however, these existing capacities must be further developed. Our case studies offer examples of where this has been done well, particularly where leaders are distributed throughout the system so that change is driven from the middle tier. In Rwanda, for example, teacher mentors have been trained to lead capacity development for large cohorts of teachers.
This widely distributed capacity building and leadership is not only a tactic for ensuring cost-effectiveness or sustainability, but also (and perhaps more importantly) an effective way of ensuring that programmes are accepted by those implementing them. Such increased ‘collective leadership’ within a programme is likely to result in the shifts in mindset or organisational culture required to effect change at scale.
- We must find ways to make institutional culture change ‘stick’
Institutional culture change can also serve as an indicator that a programme will succeed at scale, and that stakeholders are altering the ways in which they think about and approach elements of the programme.
It is important to acknowledge the complexity of such institutional culture change, but equally, the study clearly demonstrated that scaled-up programmes which neglect this may have a hard time ‘sticking’.
While there are no definitive answers, we can offer some practical insights. Strong, focused relationship-building efforts can have a positive impact on organisational change. In Ethiopia, for example, such efforts bolstered a feeling of ownership of the GEQIP programme, contributing to a wider culture change. Methods that ensure genuine co-development of programmes between donors and ministries of education can also be effective. In the STiR programme, for instance, the Delhi government was heavily involved in the design of the intervention.
Navigating the path to scale
While we hope that these insights prove useful as other donors, NGOs and implementers consider scaling, it is clear that more work is needed. Anecdotal evidence is helpful, but not enough to inform a robust understanding of how scaling really works – while traditional, impact-focussed evaluations are also insufficient, as they do not allow for an exploration of the complexities of scaling that we highlight in our research.
The case studies featured in our research provide a starting point for developing new evidence on how to scale successfully, but moving forward we want to go beyond the usual call for further research. Rather, we want to appeal for new ways of thinking that help us navigate the path towards successful scaling, and build on the success factors we have identified to date.
For this, we need new data to capture the things that we do not traditionally measure – less tangible factors such as culture, practices, relationships, and politics. We also need new methods that allow us to assess what works while navigating the inherent complexity and unpredictability of the scaling process. This might include the real-time data collection and analysis methods beginning to emerge globally – such as Education Development Trust’s Learning Partner approach.
Finally, we also need new attitudes that move away from the traditional notion that a programme, once scaled, can improve education outcomes all on its own. Instead, the focus should be on the system as a whole, accompanied by a shift in mindset. We must therefore move from simply asking what impact intervention x has had on beneficiary y, towards different questions, such as ‘how can we consider this programme in relation to the system as a whole?’ – and, more radically, ‘what contribution does this programme make to addressing the learning crisis?’.
This is an area of expertise for us at Education Development Trust: our school system reform framework uses a whole-system approach, recognising that no silver bullet or isolated intervention will improve learning outcomes. This framework, based on evidence about complex reform, helps policymakers and practitioners consider the key capabilities needed to support sustainable systemic reform. It encompasses all of the factors identified above: working within existing architectures, building leadership and capacity throughout a system, and embedding sustainable institutional culture change. To find out more about our expertise in school system reform at scale, click here – or contact us.
To read the published article in full, please click here, or to find out more, please get in touch.