Multi-site trials of practice: an RCT first for the Centre
This month, we’re announcing a major new project. We are so excited to be working with the Centre for Evidence and Implementation and Bryson Purdon Social Research to test an innovative approach to evaluation: multi-site trials of practice.
Those who have worked with us over the years will know that, to date, we haven’t been involved in a randomised controlled trial (RCT). In fact, we have spent much of our existence convincing the sector that we are not a team of ‘neoliberal randomistas’. We hear and very much acknowledge the concerns that surround RCTs: the focus on ‘manualising’ provision (and really sticking to that manual!), the ethical unease that comes with allocating some young people (who meet the criteria for a service) to a control group, the focus on causality rather than complexity, and the demands of data collection. Not to mention the stakes at play - many RCTs show ‘no-effect’ results, potentially dealing a blow to organisations and posing real challenges in how to respond. We recognise and empathise with these concerns. But should we outright reject RCTs as an evaluation design for youth provision?

Critically, RCTs remain the ‘gold standard’ for many, particularly the What Works Network. It has long been the case that youth work and informal/non-formal youth provision has had limited visibility within the What Works Centres, in part due to the small number of RCTs undertaken in the youth sector. At the Centre for Youth Impact, we continue to make the case loudly for valuing diverse forms of evidence, but is there also an argument for considering how RCTs could be beneficial in understanding the impact of provision for young people?
Multi-site trials are one potential way forward. We think multi-site trials could be an approach to RCTs that better goes with the grain of youth provision, leaning into the responsive, flexible nature of working with young people while also nurturing communities of practice. Multi-site trials recognise that the vast majority of organisations working with young people are small and community-based. They are also structured to focus on ‘practice’ rather than ‘programmes’. This is an innovative approach to evaluation. We want to know whether it’s possible to support a group of youth organisations to work together in a randomised controlled trial, focused on a common area of practice. This will involve testing the capacity of youth organisations to collectively recruit the numbers of young people needed for the trial, to ‘randomise’ the young people into two groups (one that starts mentoring immediately, and one that starts later) and to gather the necessary data from both groups, all while delivering a ‘practice model’ that is shared across the group. This is not a ‘programme’ or ‘project’, but a bundle of practices that are recognised as representing a high-quality approach to delivery. In our study, the practice focus will be on mentoring.
Anticipating ethical concerns surrounding the use of a control group, we have decided to use a waitlist design so that all young people involved will receive mentoring over the course of the study. This is not a decision made lightly - one of the main trade-offs is that we will only be able to measure intermediate outcomes for young people, and it won’t be possible to track differences in long-term outcomes for the group. As a partnership, we agreed that this trade-off was worth making for this study, and we hope it will mean that youth organisations are more comfortable with the idea of randomising young people into the two groups.
So why did we, at the Centre for Youth Impact, decide to get involved in this study? In many ways, multi-site trials bring together many elements of our work within the youth sector: establishing common language and shared definitions of practice, using shared measures, and recognising work with young people as components of practice applied responsively and flexibly to meet the needs and interests of young people at a given moment. In this study, we aim to open up evaluation to focus on how practitioners use a range of practices flexibly, in the moment, and across different settings and contexts, rather than framing provision as a tightly defined ‘intervention’ or ‘manualised programme’ - we know that work with young people is seldom viewed or experienced in that way. This could be groundbreaking in how evidence is generated and applied in the youth sector.
Importantly, multi-site trials recognise that effective approaches to supporting young people are more powerful when they’re shared across the sector rather than sitting with one organisation as a specific project or programme. This evaluation leans into the variation within the youth sector and the value of flexible, responsive work with young people. It also recognises that ‘what works’ is most likely to be a set of practices or approaches that are used flexibly in response to local context and the needs of young people, with the same practices appearing commonly across the youth sector. We know that young people thrive when they have a variety of high-quality experiences and relationships, across different settings, contexts, and through different activities. In seeking to understand and advance ‘what works’ for young people to develop positive outcomes, we should want to understand, in a granular way, how and why experiences are meaningful and seek out opportunities to provide those experiences regardless of intensity or scale of our relationships with young people.
This approach is also more inclusive for smaller, community-based organisations: working as part of a group of organisations makes it possible for them to take part in an RCT. Traditionally, smaller organisations have found it hard to participate in RCTs because they struggle to deliver to the large numbers of young people necessary for the research. Additionally, the practice model will be built collectively, drawing on existing evidence and the experience and expertise of delivery partner organisations (DPOs). It is likely that all DPOs will have to flex their practice a bit, and this is in no way an opportunity for one dominant organisation to simply roll out their mentoring offer through the other DPOs.
Accountability for the success of the trials will be shared by everyone - but ‘success’ is not just about impact on the lives of young people. Firstly, the main research question is about whether or not this approach to evaluation is possible, not about whether the DPOs’ mentoring practice is effective (though we do hope to learn more about the quality and impact of mentoring through this study). Secondly, because we are studying a practice model rather than a programme, no individual organisation’s offer will be singled out as effective or not. The intention is that this research project should feel lower stakes than a traditional RCT.
While we are excited about this project, we also recognise that it will stretch us and the youth organisations we’ll be working with into new ways of working, some of which may be quite challenging. We’re providing funding for organisations to take part in the study, and we’ll be working closely with them throughout. Part of what we want to learn about is what level and type of support youth organisations need to be successful in multi-site trials, so this is something that we’ll be carefully monitoring. If an organisation is unable to continue beyond a phase of the study, they will be supported to exit well and their contribution will be celebrated. And we’ll be talking and writing about what we’re learning along the way.
More information for youth organisations interested in taking part can be found here.