
Our thoughts – Bethia McNeil

2021-07-08

This month, Bethia McNeil, CEO at the Centre for Youth Impact, ponders the meaning and emerging focus of ‘learning’ within evaluation.


Nearly ten years ago, I was part of the team that launched the Realising Ambition programme, a Lottery-funded investment of £25m intended to support the ‘replication’ of promising practice focused on diverting children and young people from pathways into crime. The fund was firmly focused on evidence-based and ‘proven’ practice, with the intention that supporting this (rather than other, ‘untested’) provision to be ‘replicated’ in new geographical areas or communities would improve outcomes at scale. In fact, it ended up being a five-year exploration of the complexities of adaptation, innovation and organisational development, with some challenging forays into randomised controlled trials along the way.


Relatively early in the programme, we started writing about the non-hierarchical nature of evidence and the role of learning in evaluation. Our fourth ‘insight brief’ was called Proving vs Improving. Whilst we were far from the first to talk about re-focusing evaluation, it still felt sufficiently ‘new’ as a concept to be well worth saying. I recall feeling that we were being ever so slightly disruptive (although this could have been my younger self getting over-excited).


Now, nearly ten years on, it feels very different. Trotting out the old ‘proving vs improving’ line doesn’t feel disruptive at all – embracing the challenge and tension bound up in this statement is widespread. In 2021, I don’t think anyone would say “nah, I don’t think improvement matters. It’s all about the proof for me”. In fact, proving and improving have become much more comfortable bedfellows, with an appreciation of the nuance and inter-relationship between the two concepts. This is, in part, where the focus on learning has come from. ‘Learning’ sits nicely across proving and improving, and has found much more acceptance as a useful focus for evaluation. My former colleagues at Dartington Service Design Lab have just published a series of blogs on the rise of the Learning Partner, often replacing what would have been an evaluator’s role.


But what does learning through evaluation actually mean? This is a big question, and certainly not one I’m going to be able to do justice to here (maybe I’ll come back to it at some point). However, there are two elements of ‘evaluation for learning’ that I want to highlight and ‘problematise’, because they are very much on my mind.


Learning implies that one acquires new knowledge or understanding, and in this case, that it comes from the process or findings of evaluation. And here is the first challenge: I’m just not sure that many people – particularly, in the context of the Centre’s work, those working directly with young people – see evaluation that way. This is both a technical issue – how evaluation is designed – and an ‘emotional’ issue – how it is perceived. In my experience, evaluation is frequently seen as an imposition, usually initiated (whether directly or indirectly) by an outside agency or ‘other’ rather than emerging from the genuine curiosity or uncertainty of practitioners. The context in which evaluation is framed is critical in creating the potential and space for learning – put simply, it’s hard to be open to learning when you feel disengaged and irritated. Alongside this, it is undeniable that many of the dominant approaches to evaluation (think standardised pre- and post-surveys for young people) are fairly abstracted from the relational, dialogic nature of youth work. It’s perhaps unsurprising that many practitioners find their learning elsewhere and feel that evaluation tells them little they didn’t already know.


Learning through evaluation also implies that one acts on that learning – this is the connection to ‘improving’ rather than ‘proving’. One is moved to do something differently; to let something go and usher in something new. To make space in one’s view of the world for different perspectives, and perhaps even to change one’s mind about something previously held sacred. This is hard. It’s particularly hard when you work in a sector that feels undermined and persistently questioned about the value of its work, and perceives evaluation to be part of this narrative. It’s additionally hard when you don’t feel that much insight will emerge from evaluation anyway.


One of the main consequences of this is performativity, which Tania de St Croix has written about extensively. Another consequence that really concerns me is a form of complacency – not the self-satisfied kind, but the disenchanted, futile kind. One of my favourite fictional lawyers, Mickey Haller, says of his role in the courtroom: “never ask a question to which you don’t know the answer”. It feels to me that this is perhaps what evaluation in youth work has become: an exercise in asking only those questions to which we already know the answers. The perceived imposition of a particular, high-stakes approach to evaluation means it needs to be held at arm’s length, and the frequent disregard for the realities of practice means evaluation is perceived to have little to offer that reality. Together, these conditions create a context that is actively hostile to learning and improvement.

So what to do? This is, without wanting to over-egg it, the focus of my entire career. If there were an easy answer, we’d all have found it by now. But we are taking positive steps together – developing and testing approaches to evaluation that feel better aligned with practice (both its process and its reality), and trialling models of continuous quality improvement built around the inherently reflective nature of youth work. I look forward to a time when there is optimism that evaluation can offer us knowledge we don’t already have, and we are comfortable asking questions to which we do not know the answer.