This week’s blog post is by Tom Arbuthnott, Deputy Head (Partnerships) at Eton College. Tom has been leading cross-sector school partnerships in Birmingham and Berkshire since 2010. He is a co-founder of the Schools Together Group and hosts a blog which focuses on cross-sector partnerships. He was the lead editor on ‘All Together Now’ and ‘The Missing 2000’ and is currently working on a new publication about partnership teaching.
Did you know that, when fired from a trebuchet, a swede travels further than a cauliflower?
Among many other historical experiments, this was one that Jonathan Davies, who I wrote about in my last blog, used to run with primary-aged children across Birmingham.
Jonathan had constructed no fewer than three trebuchets of different sizes – not an item of kit that most schools invest in. I think I remember his joy at successfully purchasing a cannon.
Another thing Jonathan took great joy in was the numbers associated with his outreach programme to local primary schools. He chuckled as he told me that, since I’d taken over running his diary in 2012, his average year had consisted of visiting 79 schools and presenting practical history workshops, on everything from building Stonehenge to discovering steam power, to almost 4,000 children. He drove over 600 miles around Birmingham, usually in the rush hour, to get to them.
I was horrified to discover, since writing my last blog, that a senior leader in Jonathan’s school (who shall remain nameless) had written to him to discontinue his work, stating that it did not have ‘measurable impact’.
‘Nonsense’, I wrote back to Jonathan.
This contretemps, though, does raise the thorny question of ‘impact assessment’ of school partnership projects.
Three statements need to be made here.
1. Of course we want to assess the impact of our partnership or outreach work, especially if there is cost (such as staff time) associated with it or if we are reporting back to generous benefactors (or sceptical governors).
2. It is difficult to do so, given that there is no consistently appropriate output measure, and given that much of the power of partnership work lies in fostering a ‘change mindset’ and avoiding insularity, neither of which can easily be measured.
3. We therefore have to come up with numbers that are as convincing as possible but which are always, at base, going to be anecdotal.
I see three common models.
The ‘Impact Book’
The first is the ‘impact book’, which is sometimes defined more by its glossiness and by its volume than by its science. The principle behind this approach to reporting is that if you gather together all of the external-facing work in a year done by staff and students into a single volume then it looks pretty impressive.
And it really does! Schools are complex organisations that are by definition charitable; so any school, state or independent, will tend to generate a huge amount of outward-facing activity in any given year. Put it together and it looks excellent, particularly given opportunities to assemble photographs of happy kids. I’ve always rather resisted producing one of these because it feels like a lot of work to produce something that does little more than state the obvious; but governors love them as artefacts.
The ‘Impact Card’
The second is the ‘impact card’, which tries to reduce the same set of activities to a set of numbers – a little like we tried to do with Jonathan’s history workshops. Schools try to identify statistics such as x number of hours of student-to-student interactions or y% of the maths curriculum covered in mentoring workshops.
Elements of this are often qualitative: as a partnerships co-ordinator, as long as I always remember to ask what the students think of a given event, I can generate all sorts of positive attitude numbers without trying too hard (such as ‘78% of students learned “a lot” in my geography workshop’). This approach tends to accentuate the positive: in my experience, very few students are willing to complain about a fun day out of school, however well-constructed the content.
Proper researchers would comment in an instant that comparing swedes, cauliflowers and cabbages is fundamentally meaningless. However, this form of impact assessment looks convincingly forensic from the outside.
Relating partnership activities to outcome measures
The third, often driven by government, lies in trying to relate partnership activities to defined and independently calibrated outcome measures. Usually this needs to fit around structured assessment processes (such as termly maths tests on a certain element of the curriculum). It will tend to state something like, ‘kids who participated in partnership activities generally progressed x grades during the course of their Year 10 studies’.
Where this data can be generated, it is clearly effective, despite the difficulty of untangling the impact of the partnership project from that of any other interventions. To be rigorous, though, you’d need a randomised controlled trial comparing students who’d been on the sharp end of partnership work with a comparable cohort of students who hadn’t – a very difficult thing to achieve in the messy world of schools.
Review of the three methods
I find all three approaches both satisfactory and unsatisfactory in different ways. I have a rather druidical belief that most education happens by magic in the ‘black boxes’ of children’s heads and experiences. It follows from this that the more experiences that you can fire at a given child, the more chance you have of creating that magic. I think many of the best outcomes of education are by definition unmeasurable because of that. Maybe my belief stems from my background as an English teacher, where I know that ten given kids will pick something different out of even a Larkin poem as their key learning point.
I reflect sometimes on the learning which came out of the ‘All Together Now’ project, which brought together eight outstanding partnership projects in the world of music. The great Simon Toyne, Executive Director of Music at the David Ross Education Trust, wrote about putting on an opera, Noye’s Fludde, for thousands of children across Lincolnshire and Northamptonshire – and, for that project at that particular time, one key outcome was that 63% of those children hadn’t attended an opera before, let alone performed in one. It enabled a one-off magical new experience for lots and lots of children, which cannot have failed to have opened eyes and broadened experience.
In my mind, I contrast this with the equally outstanding strings project at Royal Grammar School Guildford in the same publication, where, of the 110 students who had chosen to continue their studies after the scheme over eight years, only 10 were not still playing a stringed instrument when Dale Chambers wrote his article in 2018. This is a much smaller number of students, but an incalculably larger depth of experience.
Of course, an education system needs both breadth, like Noye’s Fludde, and depth, like the strings project, as valid and impactful goals. Indeed, I would commend to you Dale’s writing about this on page 44 of ‘All Together Now’, where he writes movingly about how he calibrates the impact.
The importance of synergy
In my view, as an experienced school partnerships co-ordinator, the real indicator that I’m looking for is synergy. An awful word, I know, but I often feel that my real arbiters of success are cultures of co-operation and association as a shared habit between schools.
Synergy is often measured at the interstices between different projects. I know that I am achieving a massive amount more by taking a strategic approach to partnership rather than sponsoring lots and lots of micro-projects which wind up having massive breadth but no depth.
Concluding remarks
In sum, I am not meaning, in this blog, to downplay the importance of impact assessment by outlining its difficulties. All organisations spending either charitable or taxpayer funds should evaluate their impact very carefully. In fundraising charities, it is not uncommon to see as much as 5% of budgets spent on a process of impact assessment, because donors expect to see their money spent effectively. An advanced structure for demonstrating this efficacy is tremendously important. But it is important for your organisation or your partnership to realise that your solution is likely to be the least worst way of doing it.
So, how best to assess impact? Here are two tips.
1. First, look at organisations like ImpactEd, who might be able to help. Lots of cross-sector partnerships are outsourcing elements of their impact assessment to organisations like this, and I have heard strong recommendations from some of those on the circuit.
2. Second, if possible, employ an impact assessment co-ordinator. In my school, we have decided to do this. Part of their role is to devise metrics which work for our particular context and which make use of the data we habitually collect. However, you need a certain volume of partnership activity to make this worthwhile.
To go back to Jonathan’s trebuchet: every project I oversee is different, just as those vegetables were different. Each project will have a different flightpath and we will learn something different from each one. What matters is not the specific method of recording those flightpaths, but that we do record them, thinking hard about how to design projects which make a difference to children’s lives.