Let’s start with a basic fact. All facts are constructs. As Martin Heidegger would have put it, no construct is a description of the thing in itself. It is a proposal, a representation, an attempt at a description.
Take a supposed ‘solid fact’. Take, for instance, the fact of gravity. Gravity is a fact, surely. As eggs is eggs, everything is subject to gravity.
Well, it’s certainly the case that an attractive force between objects appears to act across the detected universe, but precisely what that force is remains unsettled. From (apocryphally) watching apples fall on heads, Newton described this universal force in the way we all learned up to GCSE. Einstein took Newton’s model of gravity and picked holes in it, describing gravity in the way students now learn it at A Level. Meanwhile, post-docs explain to undergraduates that even the accepted Einsteinian models are themselves only inadequate constructs. A better construct is being sought: a unifying theory of forces, still not described by Hawking and co.
Physicists readily accept that they are merely dealing in representations of reality, not reality itself. Perhaps that is why physicists such as Fritjof Capra and David Bohm pioneered what they call a ‘new science’, open to the non-material, the ambiguous, the paradoxical, the spiritual. Mystery. They know that the hard knots of real matter actually dissolve, before you can touch them, into states of energy and proximity.
By contrast, it is biologists (I confess, my first degree) who tend to be religiously convinced their ideas are unarguable, utterly certain and right. Think of the militant evangelicalism of Richard Dawkins, that great campaigner for the truth that the world is nothing more than nuts and bolts. His epistemological absolutism must, I suspect, seem a little odd to physicists, who know all versions of reality are vague approximations, mental constructs designed to get us slightly closer to the things we can never actually claim, or touch, or control.
Biologists tend to miss this because their world sits somewhere between the empirical, experimental machineries of the particle physicist at CERN and the narrative-writing social scientist at the RSA. Biologists dismiss physicists as dealing with forces at a level ‘unable to explain complex, emergent properties of organic life, reproduction and speciation’. At the same time, they pooh-pooh the ‘loose, shoddy, subjective, non-empirical pseudo-science’ of the social scientist, such as the educational researcher.
The lot of the educational researcher
This, in fact, is the lot of the poor social scientist, of which the educational researcher is one: to be kicked in the face by the ‘hard science’ bully. And perhaps that contributes to the endless existential crisis that afflicts social science research; it certainly seems to be part of the lively energy in the debate over educational research methods and validity to which schools are becoming alert.
The core problem for the educational researcher, as a social scientist, is this: how can I do valid research? He faces methodological challenges unknown to the scientist. Human beings can’t be put in test tubes. They can’t be dissected. They have wills. They have to be asked permission to take part in his project, and they slip out of the constraints of the experimental designs set for them. He sets up his case-study-based project. Five students are absent from the recorded first lesson, skewing his data. He watches ten lessons… (my! what a lot of observational data he now has to analyse…) yet those lessons were just 3% of the three hundred taught that week. How can he generalise from that sample size? What claims can he make that would possibly be valid beyond the limits of that particular experience? Thus the educational researcher feels constantly anxious, terrified that his research doesn’t really count, or worse, will be counted as ‘bad research’.
Bad science or bad customers?
Some educational researchers retreat to empiricist methods. Quantitative studies are commissioned on huge sample sizes. Claims are made, but how valid are those claims to the real life of the classroom? Suppose, for example, a study examines 5,000 students to see whether they turn left rather than right after being shown red ‘left’ signs. Yes, we now know with confidence that students turn left when shown red signs. But so what? What can we extrapolate from that? How much weight can that finding bear when predicting human behaviour in complex real-world situations where students make hundreds of left-and-right decisions moment by moment? The finding is valid, but is it useful?
Generalising applications from limited, circumscribed data has been a route to poor educational product development. In the rarefied, abstracted confines of a lab or campus, looking at only a few factors among millions of interacting ones, a researcher publishes a claim. Maybe it is a claim about the way the brain processes different kinds of data (just as a random example…). The researcher states that their claim is a proposal, a construct, an artifice to describe a small phenomenon they observed. It is a way of putting that finding into language. It is not the truth.
But teachers don’t like constructs; they like tools, they like ideas that they can use, can put into practice.
This desire for practical tools creates a market for someone to translate that small, limited, circumscribed, tentative claim into a tool that teachers can use… hence:
Brain gym, learning styles, VAK, EQ, Thinking hats, mindset….
These tools are created because there is a demand from teachers for them. Teachers are in the business of doing, of getting results in a classroom. They want practical things that can help them do that, not tentative research constructs. So a company creates those certainties for them. Sometimes the original researcher is involved in the tool development and sometimes not. Sometimes the researcher is in despair at what the teaching profession goes on to do with their subtle, qualified claims and constructs.
Neuroscience has been most susceptible to this kind of poor adoption. Sometimes the neuroscience itself has been bad science. More often, the application of the science by teachers has been bad practice. Neuroscience has that seductive appeal, the promise of unlocking the kernel of what learning actually is. But neuroscience does not and, indeed, cannot achieve that. Peering into the neural activity of thirty teenagers rampaging in science, lesson three on Monday morning, is currently beyond the scope of fMRI scanners. Teaching may draw on bits of hard neuroscience but, in the end, classroom teaching is a social, collective experience. Neuroscience does not adequately deal with collective cognitive-affective phenomena. No, teaching is informed by studies inside the brain but it will never be fully described by them. Teaching is a live happening, a collective event.
Proper confidences of educational research
That is why the appropriate discipline to measure teaching and learning must remain social science. And teachers must be confident social scientists when they research the efficacy of their methods. They must be confident in what their research can claim, and accepting of its limits as social science.
So what are the limits and confidences of educational research?
Because studies take place in real schools, and schools are particular rather than abstract instances of education, conclusions from educational research are always open and revisable. This applies to quantitative studies, but especially to qualitative ones.
When teachers act as researchers into the efficacy of their own schools, they must acknowledge their own impact on the study outcome. There is a major ‘Hawthorne Effect’ when teachers lead internal school research, whereby the presence of the observer, as well as the sheer fact of conducting the observations, will by itself improve results. Hattie’s comparative ‘effect size’ model, explained in Visible Learning, is one way to guard against exaggerating results.
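For readers unfamiliar with the mechanics, an effect size of this kind is simply the difference between two mean scores expressed in units of their spread; Cohen’s d with a pooled standard deviation is one common formulation. A minimal sketch, using hypothetical pre- and post-test scores (the numbers are illustrative, not from any real study):

```python
import math
import statistics

def effect_size(before, after):
    """Standardised mean difference (Cohen's d, pooled standard deviation)."""
    n1, n2 = len(before), len(after)
    s1, s2 = statistics.stdev(before), statistics.stdev(after)
    # Pool the two sample variances, weighted by their degrees of freedom
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (statistics.mean(after) - statistics.mean(before)) / pooled_sd

# Hypothetical test scores for one class, before and after an intervention
pre  = [52, 48, 60, 55, 47, 51, 58, 50]
post = [58, 55, 64, 61, 53, 57, 63, 56]
print(round(effect_size(pre, post), 2))  # → 1.33
```

An effect size expressed this way lets a teacher compare an intervention against a benchmark (Hattie famously treats d ≈ 0.4 as a typical year’s progress) rather than against raw marks, which is precisely what makes it a check on over-claiming.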
Because studies are done ‘in the field’ rather than in the lab, there is a good chance the findings will be relevant and useful rather than valid but useless.
Social science methodologies (case study, interview, participant survey, observation) are all valid. Quantitative methods (tests, controlled assessments) have some advantages of reliability (sample size, limitation of conflicting factors, reduction of noise) but disadvantages of applicability (how useful are the conclusions?). Bigger samples are always better than smaller samples, whatever the method.
The researcher must know which methodological tools he is using, not claiming to be screwing a bolt when he is bashing a nail. Beware the researcher who does not know which methodology he is using, why he is using it, and what evaluation of the data it will enable him to do, by what method.
Beware the heavy boot of higher education. In the perceived academic hierarchy of the British mind, universities look down on secondary schools, which look down on primary schools. For their own reasons (to do with funding, amongst other things), universities are piling into school research right now. Their presence is welcome and their expertise in research design and analysis is vital. However, they generally have rather less interest in actually improving school education than in measuring the things designed to improve it. Teachers want to improve things; this is a noble and arguably more important goal, and we need to ensure the ‘doers of teaching’ retain control over the ‘measurers of education’.
Research should be as non-intrusive as possible. As above, academics love hefty research designs. Schools will quickly become sick of research projects if they intrude too much upon the actual functioning of the school day. Design light projects which are lean and clean.
The real opportunity of a school research project is for teachers to become learners again. A school should adopt an agreed ‘active-learning cycle’ (e.g. Kolb, Lewin, Honey and Mumford) as the structure through which all its research activities are driven. By adopting a common format for eliciting research questions, implementing studies, and applying and evaluating study data, schools will engage staff and pupils with the project’s goals, stages and benefits. Research projects will be coordinated and heuristic, not random and whimsical.
Research should enrich the story. Teachers are social constructionists, telling stories that will capture the deeper participation of students. Live, ongoing research enriches that story, makes it fresh, open, edgy… I was asked recently by the Head of one school participating in one of our Human Ecology Education studies, ‘So presumably what you expect to find is X and Y…’ My reply was, ‘Not at all! I’m not assuming anything… that is why we are doing the research: to find out!’ Research actually discovers new things, which is what makes it exciting as part of a school’s life. A null result may be just as interesting and important as the finding we expected. Heads should be sharing the current research questions and findings with their students all the time… it’s better than watching spirogyra bubble oxygen in test tubes.
Someone wise said ‘the unexamined life is not worth living’. Educational research is about finding out what works, but it is also about much more than that. It is not, in the end, about collecting more data, or obtaining more knowledge; it is about regaining wisdom, the wisdom to derive pleasure, care and pride from the crafting of education.
A score chart to plan your educational research study
At Human Ecology Education, we have designed a rough rule-of-thumb set of criteria against which experimental designs can be scored. The higher the score, the stronger the study data is likely to be. I have suggested a score above 40 is an acceptable minimum. I hope it will help school-based researchers identify aspects of a proposed study that they can strengthen, giving them confidence in their ability to do research of genuine value. Feel free to take and use it as you see fit!
About Simon P Walker
Dr Simon P Walker is a man who has done too many degrees. After teaching in the humanities, science and social sciences across several research institutions he is now Director of Research at Human Ecology Education. He is a Visiting Fellow at Bristol Graduate School of Education and his specialist field is the regulation of Cognitive Affective Social (CAS) state in students and its impact on educational outcomes.