In this blog, our Evidence and Impact Manager, Annie Barber, shares our approach to measurement.
“At Khulisa, we don’t believe in measurement for the sake of it. We don’t measure to tick boxes. We measure to learn whether our programmes are working and to see how we can improve.
Lately, we’ve been doing a lot of work looking at how we measure the changes our ‘Face It’ programme brings about in schools and Pupil Referral Units.
We work with some of the most traumatised and vulnerable young people in the UK. Our participants are likely to be right in the middle of a tumultuous time in their lives, battling mental health issues and feeling incredibly scared and stressed out. This leaves us in a bit of a measurement conundrum.
On the one hand, we pride ourselves on being a data-driven, evidence-based organisation and have always taken measurement seriously. Without collecting data about the changes our programme is (or isn’t) bringing about, how do we know it’s even working?
On the other hand, we all know how it feels to be forced to complete piles of paperwork – it’s dehumanising and frustrating at the best of times. Our participants can struggle with reading and writing, many have English as an additional language, and most are just too stressed out or traumatised for lengthy forms.
So, what to do?
First, we are crystal clear about what is essential to measure. We’ve produced a detailed Theory of Change and are very specific about how we intend to improve social and emotional wellbeing. ‘Face It’ does this through improving emotional regulation, coping skills and resilience – so this is what we need to measure. Our Theory of Change is a lot more complex than this, but you get the gist.
Next, we only use validated survey tools. Most importantly, they must be short and simple – less is more, simple is smart! This is important for two reasons. First and foremost, to avoid putting our vulnerable participants through an unnecessarily painstaking process. Second, because vast amounts of poor-quality data are of no use to anyone, and that is exactly what you get if you stick long-winded surveys in front of stressed-out people.
Our few and focused questions are all positively worded. The last thing we want is for our participants to think we come with any preconceived negative assumptions about who they are. Negatively worded questions can give this impression and start us off on completely the wrong foot.
The other thing we do is pilot our measurement tools. We ask our participants how it makes them feel, whether there’s anything they dislike, whether it’s too long or makes them feel uncomfortable in any way. We do this a lot, and we make changes based on feedback. We also offer the option of completing the survey on a tablet or on paper – anything to make the whole process as user-friendly as possible.
None of this is ground-breaking, but these principles can sometimes get lost when it comes to measurement in our sector, given the many pressures.
Putting wellbeing at the heart of any measurement process will lead to good things for everyone – happier participants and better quality, more accurate data on the things that actually matter.”