#rED15: Some Thoughts

Last Saturday I went to the third annual researchED national conference. As we have come to expect from researchED, the event was a hit and eminently enjoyable. Who would have thought a desire to raise research literacy among teachers, and in so doing instantly raise the professionalism of the profession, would be so popular?

Having feet in both camps (teacher and researcher), I can see nothing but brilliance in bringing the two together in this way.

Lots of people have written about the whole event in blogs, so I just want to reflect on a few things that struck me over the course of the day.

The first concerns outcome measures. Why does so much research on the effectiveness of interventions intended to influence teacher practice use teachers’ self-report as its outcome measure? In three of the talks I attended (the only talks I went to that reported on specific studies), teachers’ own perceptions of their practice were used as a measure to evaluate the success of the interventions being investigated. That is to say, if teachers said they were doing something, the researchers assumed that they were actually doing it. There are obvious shortcomings to this approach. Teachers’ perceptions of their practice are important, but teachers are human, and everyone knows how good we humans are at examining ourselves dispassionately. It strikes me as imperative that, if we want to determine whether an intervention or approach has actually influenced teacher practice, we must actually observe that practice to find out. This could be through direct classroom observation, looking at lesson plans, or work scrutiny. At the very least it would provide an opportunity to triangulate results, and in so doing potentially elevate the trustworthiness of the research.

I wrote about an example of this in this post about Ofsted. In two studies on the effects of feedback from Ofsted inspectors on modifying teacher practice, the authors found that some teachers said they were inclined to change their practice based on advice they had received. The authors did not go on to find out, however, whether these teachers actually did so. That seems like a missed opportunity to me.

Is that really what the study finds?

Related to this issue was an article, in the special TES supplement given to all attendees at the event, about the effectiveness of Twitter as a tool for Continuing Professional Development (CPD). In this case, however, it was the author of the article who appears to have over-egged the methodological pudding. He claims that a study found Twitter to be a more effective way to conduct CPD than traditional methods. Teachers’ perceptions and opinions are important, particularly when assessing the feasibility of an intervention, but the sub-heading (pictured) can’t be justified by the study on which it reports. Teachers saying they like Twitter as a mode for CPD is a far cry from concluding that it is more effective than traditional methods. At least the author has included enough information about the study for any research-savvy reader to see the mistake he has made. Nonetheless, I’m sure he would have benefitted from attending Max Goldman, Alex Quigley or Rob Coe’s sessions. I hope he did – they were excellent.

Speaking of Rob Coe, a second point that struck me as interesting came during his very cheering assessment of how far educational research has come since his 1999 ‘manifesto for evidence-based education’. Quite apart from the observation that the education research community is warming to the idea that unbiased allocation to comparison groups is a helpful feature of educational research, he made what I feel is a really important point about what we are actually testing when we engage in large-scale research. He used Assessment for Learning (AfL) as his example: what we know about AfL is not actually what we know about AfL; it is what we know about the effects of telling teachers to do AfL, which is related but quite different. The point I believe he was making is that the often relatively tightly controlled environment in which small-scale research is conducted does not reflect what happens to an intervention once it has been released into the wide world of schools. Once an intervention has been assessed as effective in the former, it enters the latter and is battered and modified and reimagined and tried out and dissected and chewed up and spat out. Once it has been embedded (if it ever is), it may well be a very different beast from the beautifully crafted strategy born out of an educationalist’s considered understanding of the problem she or he wants to address. To me that says there is an important conversation to be had here. When we assess educational interventions, should we conceptualise the investigation as assessing the effects of Intervention A on Outcome B, or the effects of telling teachers to do Intervention A on Outcome B? I think this is an important and realistic distinction. It would also acknowledge the point that many ‘negativists’* use to bash people who value research designs that use unbiased allocation schedules to create comparison groups: that education is really complex and you can’t isolate the active ingredients.

My last observation is a small but important one. Naomi Flynn, in her talk about creating the EAL (English as an Additional Language) MESHGuide with Hampshire EMTAS, and Donna Barratt and Catherine Brown, in the talk that followed on deepening teachers’ understanding of the needs of EAL pupils, all described how important engagement with end users was for the success of their projects. So, adding to the point I made at the top of this post, I see nothing but brilliance in bringing in end users to foster a symbiotic relationship that includes researchers, teachers and parents/pupils, to help us work out what really works.

*negativist: someone who uses the word positivist as a term of abuse
