Make do and mend. It was my Nan’s mantra. She lived through the shortages of the Second World War, and the rationing that extended well beyond it, and she just couldn’t shake the habit despite living into the 1990s. Make do and mend she certainly did, which is why nothing ever got discarded, no matter how outdated or unfit for purpose it had become. My poor Grandad would thus go walking in the rain looking like Captain Scott, weighed down by a patched-up heavy woollen pullover and canvas smock – all the rage for intrepid adventurers at the turn of the 20th century but, sadly, not fit for purpose in the era of Gore-Tex.
Luckily, my Nan was not involved in the modernisation and improvement of our examination and assessment system, but she might as well have been. Make do and mend seems to have been the principle underpinning successive attempts to fix the system by which our school leavers are judged and ranked. It is time for a dramatic re-think in our approach.
It is not as if the alarm bells haven’t been ringing loudly enough. Rarely, if ever, do we have consensus across the educational spectrum about important issues. This is a glorious exception. Everyone, it would seem, knows that the exams our children take neither support the education we know they need nor give them the skills employers value most, yet we continue to tinker with and patch up a system which is now woefully outdated and becoming more so as the pace of change quickens.
Throughout human history, the nature of assessment of individuals has been remarkably nimble: adjusting to each era to respond to changing needs. Our early hunter-gatherer ancestors would look for bravery, strength and co-operation. All very important in bringing down a woolly mammoth! Assessment was brutal but simple. If you didn’t have these characteristics – if you happened to be timid, weak and uncooperative – you wouldn’t last too long in the group and your genes would disappear along with you.
During the age of agriculture what mattered was having one skill and doing it well. A ploughman hiring a plough hand would worry only about whether his new employee could hold a plough straight and true for a day; it didn’t matter a jot if in the evening he wrote sonnets to his true love or could kick a pig’s bladder clean across the yard! For centuries having a single skill and performing it well was enough to keep you fed and clothed for life.
Then as the age of information dawned and humanity moved from the field to the office, two skills became incredibly important: method and recall. In the absence of computers, the ability to hold large amounts of knowledge or information and then apply this knowledge to a task was paramount to success. The more skilled the job, the greater the need for these two skills and the more important it became to show that you had them. So in 1858, a system of public examinations was introduced in order to assess and rank school leavers in these two aptitudes. Children would go into a large room where they would pen answers to examination papers which were split into subjects: English Language, English Literature, Mathematics, Geography, Latin, French, German, Sciences, Art, Music and Religious Studies. Recognise this? You should, because it is basically the same system that we employ today. Like my Grandad’s demob suit, it has been patched up and re-shaped many times, but underneath it is still the same old suit. We have made do; we have mended. Children remain valued and recognised largely for their ability to recall knowledge and apply method.
The modern world has changed beyond all comprehension since 1858. A time traveller from the mid-19th century would be discombobulated by almost every aspect of the world we live in but would take comfort from the familiarity of the education system, and even more so from the nature of the examinations that mark the end of a child’s schooling. Should our time-traveller stumble upon an examination room in June, they would feel very at home.
If it ain’t broke, don’t fix it. Except it is broke and we need to take a radical approach to fixing it.
Fast forward now to 2015 and a report by Oxford University and Deloitte suggesting that huge numbers of professions and jobs are at risk of automation and digitisation in the next two decades. Robots are in and humans are out! At least, robots are in those jobs that require the skill sets they perform particularly well: method and recall. Precisely the same skills that each summer we ask our children to demonstrate during two months of stressful and potentially life-changing assessment.
The good news is that whilst computers are remarkably good at some things, they are also remarkably inept at others. They can’t adapt – at least, not outside of a small range (try asking a robot on the car production line to diagnose an illness). They have no emotional intelligence (you wouldn’t ask a robot to support an employee suffering from stress). They are incapable of leading (would you want a computer running your company?). Public speaking, critical thinking, creative writing … I could go on. In fact, there are a huge number of aptitudes computers don’t have which in the digital age are becoming increasingly valued and important. The latest study may well prove to be as accurate as an episode of Tomorrow’s World in the 1970s, but the core principles strike a chord with me. It is the same message I hear time and again when I talk with employers: don’t send us robots, they say, send us school leavers with skills that are truly relevant.
People like Richard Branson and Michelle Mone are often cited as great examples of success against the odds, having left school with no qualifications, but really this shouldn’t be a surprise. Their strengths lie in different areas – areas which matter today. They possess skills like adaptability, leadership, empathy and independence of thought.
It follows therefore that we need an education system that nurtures and assesses these skills, but instead we’re stuck with a system designed for a different era. The very best schools will continue to educate in a way that values soft skills, knowing that, despite the lack of recognition in the assessment system, the skills their students learn will stand them in good stead for their futures. Don’t be fooled, however, into thinking that this is the norm. Despite the oft-heard rhetoric, most schools – independent and state – driven by the funding imperatives of featuring high in the league tables, adopt a safe and narrow approach which they calculate will provide the greatest chance of success in the summer round of testing.
A good starting point would be to have exam markers who are trained, qualified and have the time to spot truly creative and original thinking. Most importantly, we should take a completely fresh look at our national examination system starting with a vision of where we would like to be before working backwards to create the system that best supports it. It is a huge undertaking but not impossible; the International Baccalaureate, for example, already goes some way to achieving this aim by placing emphasis on skills such as critical thinking, service and independent learning.
At Wellington College we say “do not ask how intelligent is this child, but how is this child intelligent?” Until our examination system does the same we are doing tomorrow’s generation a great disservice. It’s time to stop the make do and mend approach and create a system which recognises and values the skills required for a modern world.
From September 2014 Wellington College will enter a two-year partnership with Research Schools International, led by Harvard Graduate School of Education faculty, to explore the broader topic of independent learning, specifically the areas of Growth Mindsets, Resilience, Grit and Active Learning. We will also be working closely alongside partner schools from our Teaching School Alliance in this process.
The initial direction of the project was decided in the Summer term of 2014 through consultation with all staff at the College, via a survey conducted by Harvard.
The project has three broad stages:
1. A comprehensive literature review of main areas and dissemination to staff.
This will be presented by Harvard GSE faculty to all staff and will provide the starting point for our enquiry. Strands and emerging themes from this review will be used to facilitate discussion at both a school and network level. Engaging with the wider evidence base is a vital part of this process and means examining not only what has been written in the field to date promoting independent learning but also examining its criticism, and alternative perspectives.
2. Collection of baseline data from all students, detailing exactly where students are in terms of the four areas outlined above. This will be in the form of a quantitative and qualitative survey designed by Harvard to capture students’ attitudes and mindsets towards independent learning. The survey will be trialled with a group of student research fellows to test efficacy and appropriateness.
It is important that we have a big enough sample size, so we will collect baseline data from three additional schools from our Teaching School Alliance.
The findings of this research will be analysed by Harvard GSE and delivered to schools in Summer 2015 to inform choice of interventions in year 2.
3. Trial and evaluate interventions.
Based on the baseline data and what we have learned about independent learning, we will decide in consultation with Harvard GSE and our partner schools what interventions we might trial in year 2 to facilitate independent learning, to inform teaching practice and to improve student outcomes. These interventions will then be trialled and evaluated for impact and efficacy. It is planned to use multiple approaches, including a randomised controlled trial.
There will be a launch event for this partnership on September 10th at Wellington College with a presentation from Harvard GSE faculty including Christina Hinton and Bruno Della Chiesa. All are welcome.
by David Walker
This conference drew delegates from around the world, for an analysis of what is rapidly becoming a global movement. With hundreds of people in the room, John Hattie introduced his 3 themes: understanding learning, measuring learning and promoting learning.
Throughout the day the reality was that there were other pervading ideas: the SOLO taxonomy was extolled as the holy grail (as a way of moving learning from ‘surface’ to ‘deep’), Dweck’s growth mindset received its fair share of positive press, and the benefits of making students struggle (in ‘the learning pit’) was mentioned time and again. In contrast, ideas like VAK were wholeheartedly lambasted.
In his keynote speech, Hattie made it clear that the job of the teacher is to facilitate the process of developing sufficient surface knowledge to then move to conceptual understanding. And this is teachable. The structure that this hangs off is the SOLO taxonomy: one idea, many ideas, relate ideas, extend ideas (the first two are surface knowledge, the latter two are deep). Another way of looking at this is that students should be able to recall and reproduce, apply basic skills and concepts, think strategically and then extend their thinking (by hypothesizing etc.)
So that’s surface and deep. Next Hattie described knowledge in terms of the ‘Near’ and the ‘Far’, i.e. closely related contexts or further afield relations – he proposed that our classrooms are almost always focused around near transfer. Hattie finished his keynote speech by briefly outlining 6 of the most effective learning strategies:
- Backward design and success criteria. ES=0.54 (with ‘Outlining and Transforming’ the most striking at 0.85, although he didn’t really say what this actually meant). More straightforwardly, worked examples are at 0.57 – for me, as a Physics teacher, this is critical. Finally, concept mapping entered the hit parade with an ES of 0.64. Hattie then went on to discuss flipped learning, which he seemed quite positive about, perhaps because the effect size of homework in primary schools is zero – which he spun to be a positive: “What an incredible opportunity to improve it”.
- Investment and deliberate practice. ES=0.51. Top of the table here was ‘practice testing’ (even when there is limited feedback). Hattie thinks that the key to this is that students are investing in effort. “We need to get rid of the language of talent”, including setting etc. Dweck’s mindset work was repeatedly referenced during the day, including an interesting idea about the dangers of putting final work on the walls – perhaps we should decorate our rooms with works in progress? But how do we make the practice that they do ‘deliberate’? Another author repeatedly referenced was Graham Nuthall and his work on needing three opportunities to see a concept before we learn it. I thought it was interesting that Nuthall was given such a glowing report when his book ‘The Hidden Lives of Learners’ includes relatively little in the way of attempting to measure and quantify his conclusions. Hattie concluded this section with the catchphrase: “How do we teach kids to know what to do when they don’t know what to do?”
- Rehearsal and highlighting. ES=0.40. Some strategies here: rehearsal and memorization, summarization, underlining, re-reading, note-taking, mnemonics, matching style of learning (in order of effect size, with the latter at ES=0.17). The key here is to get kids to get sufficient surface knowledge so they can use their (limited) working memory to do the far learning. I thought it was interesting that matching learning styles gets such a bad press when it does, according to this, have at least a small positive impact.
- Teaching self-regulation. ES=0.53. Reciprocal teaching – not just knowing, but checking that they know why.
- Self-talk. ES=0.59. Self-verbalization and self-questioning.
- Social Learning. ES=0.48. The top effect is via classroom discussion (at 0.82); Hattie stressed that this should not be a Q&A, but an actual discussion. “When you are learning something and you’re still not sure, then reinforcement from classroom discussion is the biggest effect” … but if the discussion is of something wrong, then people are more likely to remember it. The most memorable quote here was that “80% of the feedback in the classroom is from peers … and 80% is wrong”.
- What about Direct instruction? ES=0.6. The important thing is sitting down with colleagues and planning a series of lessons, and then jointly discussing how you are going to assess. “If you go out and buy the script, you’ve missed the point”. Constructivist teaching only has an effect size of 0.17. The ‘guide on the side’ approach leaves behind the kids who lack self-regulation. This resonated with the work of David Didau (the learning spy). Interestingly, ‘problem solving’ has a negligible effect size, but ‘problem based teaching’ has a large one.
- And what about IT? Technology is the revolution that’s been around for 50 years and has an ES=0.3. Teachers use technology for consumption purposes, e.g. using a phone instead of a dictionary. That’s why the ES is so low. If you use technology in pairs, then the ES goes up. Why? Because they communicate and problem solve; i.e. use it for knowledge production. Three linked concepts were mentioned: the power of two. Dialogue not monologue. The power of listening. Compare this to the quip: “Kids learn very quickly that they come to school to watch you work”.
- Feedback? The question of feedback is not about how much you give, but how much you receive. Most of the feedback is given, but not received. Students want to know “Where to next?”, so we should show another way, giving direction. This is incredibly powerful. “How do teachers listen to the student feedback voice, to understand what has been received?” This is at the vanguard of Hattie’s current research.
- Error management? Typically errors are seen as maladaptive … and teachers create that climate: solving the error, redirecting to another student, returning the correction to the student who made the mistake, or (though hardly ever) ignoring the error. Hattie sees errors as the essence of learning. He mentioned the teaching of resilience as an example of best practice.
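The effect sizes (ES) Hattie quotes throughout are standardised mean differences. As a rough illustration of what a figure like ES=0.40 means, here is a minimal sketch of one common formulation, Cohen’s d, computed on invented scores – the data below are purely hypothetical and not drawn from any of the studies mentioned:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: the difference in group means divided by the
    pooled standard deviation, where pooling weights each group's
    sample variance by its degrees of freedom (n - 1)."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical test scores for a class with and without an intervention
with_intervention = [72, 75, 78, 80, 83]
without_intervention = [68, 70, 73, 75, 79]
print(round(cohens_d(with_intervention, without_intervention), 2))  # → 1.07
```

Hattie treats ES=0.40 as his ‘hinge point’ – roughly the average effect of a year of normal schooling – which is why the strategies above that clear it comfortably (classroom discussion at 0.82, for instance) attract so much attention.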
Session 1: the Visible Learner (with Deb Masters)
In her work with John, Deb has developed a model for measuring the effect of feedback, asking: how do you take the research and put it into a process in schools? She called this ‘Visible Learning Plus’. We were asked to come up with our ideal pupil characteristics: questioning, resilient, reflective, risk takers. And the least ideal: not proactive, defeatist. No surprises there, then.
Deb defined visible learning as “when teachers SEE learning through the eyes of the student and when students SEE themselves as their own teachers.” So the job is to collect feedback about how the students are learning.
We also need to develop assessment capable learners (ES=1.44). What does this mean? Students should know the answers to the questions…Where am I going? How am I doing? Where to next? Students should be able to tell you what they will get in up-coming assessments.
This workshop slightly lost its way towards the end as time ran out. We quickly looked at the use of rubrics to develop visible learners, and I was struck by the links with the MYP assessment structure.
Session 2: SOLO taxonomy (with Craig Parkinson – lead consultant for Visible Learning in the UK)
This is based on the work of Biggs and Collis (1982) and was an interesting and practical session. Much of it was based on the ‘5 minute lesson plan’ (which I remain unconvinced about, despite liking the idea of focusing on a big question). The key is to design and plan for questions that will move students from surface to deep learning (one idea, several ideas, relate, expand). SOLO was the preferred model here, over the well-established Bloom taxonomy. I was sitting next to Peter DeWitt, whose blog ‘Finding Common Ground’ expands on this.
Session 3: Effective feedback (Deb Masters)
“If feedback is so important, how can we make sure that we get it right?” For feedback to be heard the contention was that you need “relational trust and clear learning intention”. I agreed with the former, but am less convinced by the latter. What do students say about effective feedback? “It tells me what to do next”. Nuthall was mentioned again – 80% is from other kids, and 80% is wrong. Why is there such a reliance on peer feedback? Students say that the best feedback is “Just in time and just for me” … and interaction with their peers is a good way of getting this.
Deb used the golf analogy to discuss the levels of feedback:
- Self … praise (“cheerleading does not close the gap in performance”).
- Task … holding the club etc. This is often where teacher talk features the most.
- Process … what do you think you could do to hit the ball straighter?
- Self-regulation … what do you need to focus on to improve your score?
The idea is to pick the right level at which to give the feedback.
Can we use the model to help the pupils to give each other and us feedback? I was particularly struck when one delegate from a large school in Bahrain suggested that they are experimenting with the use of Twitter to get instant feedback about the teaching in real time!
Keynote 2: James Nottingham: Visible Learning as a new paradigm for progress
James started with a critique of the current labelling practices that occur in schools. For example, every single member of the Swedish parliament is a first-born child, and 71% of September births get into top sets compared with only 25% of August births: “Labelling has gone bananas … if you label pupils then you affect their expectation of their ability to learn”.
Eccles (2000): Application = Value x Expectation
Again, progress should be valued rather than achievement. How do we go about getting this … what is the process involved?
The ‘learning pit’ was discussed (Challenging Learning, 2010). Often teachers try to make things easier and easier…the ‘curling’ teacher (push the stone in the right direction and then desperately clean the ice to make it easier for it to go further). I liked that analogy. James (rightly in my view) said that our job is to make things difficult for pupils, after all “Eureka” means “I’ve found it”. I’m sure his book will expand on this, but his basic structure was:
- Conflict and cognitive dissonance
Some thoughts from the day
- The key message that came through from the whole conference was that everything has to hang off the learning objectives / the learning intentions. Is this just because their research requires a measurement of outcome? This is performance, but not necessarily learning. The question is whether the interventions that Hattie has found apply to effective classroom performance and learning…or just performance? I was struck by the contrast between this and what Didau talks about.
- Throughout the day there was an interesting use of instant feedback – point to one corner of the room if you know about x and the other corner if you don’t.
- Hattie recognizes that we are extremely good at the transfer of ‘near’ knowledge, but not good at the ‘far’ … and that is okay: we shouldn’t throw out the baby with the bathwater.
- “It’s a sin to go into a class and watch them teach … because all you do is end up telling them how to teach like you”. You should go into the class to watch the impact that you have.
- Should we stop the debate about privileging teaching?
- Can we plot a graph of achievement against progress for our students? This can allow you to make interventions with the drifters.
- How do we measure progress?
- Do we have enough nuancing of assessment levels?
- Hattie: “What does it mean to have a year’s growth / progress? We have to show what excellence looks like. Proficiency, sure, but the key is the link with progress.”
And one final thought: “Visible Learning into Action” will be out between April and June next year to show how this might be put into practice in schools.
Work smarter. Be efficient with your time. Use technology wisely. Downloading Turnitin as an app for my iPad allowed me to do all three in one easy hit. Existing practice would have been to download and print out each uploaded document one at a time, to be marked by hand. Now, using this e-marking version, the time taken to mark was halved, at no expense to the accuracy or detail of feedback for the student.
Downloading the app took a matter of minutes, and my existing account synced almost instantly. The front screen looks like this.
After selecting the relevant class, all students appear in a list, separated into who has and has not uploaded their assignment to Turnitin.
Marking a piece of work is simple. On selecting the pencil icon for a student, their work appears with the originality report shown, different colours corresponding to different sources. You can switch this off with a slider near the top right-hand corner. Being touch screen, you can insert comments precisely where you want to: a comment box appears on touch, ready for a comment to be typed, and collapses to a speech bubble once written. Both are shown here.
There are also a number of pre-designed error/improvement comments to use if you like, such as sp. for spelling mistakes.
Student work downloads well, and tables and graphs appear as they would when scanned, ready for comments to be added anywhere as appropriate.
To complete the marking, you simply select the pencil icon, which is now in the top right hand corner of the screen, where several options appear on the ‘grade overview’ screen. I tend to write a general comment in addition to the feedback given throughout the text, and don’t take advantage of the voice comment – though that is available. A combination of always refusing to accept that my voice sounds the way it does, and my assumption that students will be too busy laughing rather than listening to the feedback, results in me ignoring this particular function.
There is, however, an excellent rubric function, so you can upload rubrics designed by you on the website – specific to any course – which will appear under the ‘open rubric’ button in the top left-hand corner of the grade overview screen. Just above this is where you type in the number to grade the piece of work as appropriate. You can put in more than one number if required. In the example shown, I have given two marks, corresponding to two criteria from the MYP curriculum. The grade overview page as discussed looks like this.
If I did choose to print out the document once graded, all comments appear in an appendix, numbered, at the end of the originality report. This is particularly useful for moderation reports, where evidence of clear, accurate marking is obvious.
Finally, and importantly for me in a world where I can never be quite certain that a wifi connection will be available, the Turnitin app offers a solution. Once in an area of wifi, you can download all pieces of work – taking about 10 to 20 seconds per piece – so that marking can be completed anytime, anywhere, without the need for wifi. The feedback will then upload once refreshed. The total time taken to mark the work was certainly reduced for me, and there was no wasted paper either – even the environment approves.
Last term, being out of school at a conference, I had to set cover.
Feeling guilty about missing my A2 English lesson (especially as this was early in the term), I made sure that they would be occupied and challenged in my absence. I asked them to come along to the classroom (in my absence) and to bring their computers (we run a BYOD programme and have school-wide Wifi) and a copy of the text we were currently studying: Joseph Conrad’s Heart of Darkness.
I set up a Google document that the whole class could edit, sent the link and these simple instructions: read one paragraph at a time, after reading each write two or three questions that you want answering then have a go at answering (as best you can) some of the questions that other members of the group have asked. All questions and answers were to be typed into the same shared document.
Sitting in one of the sessions at the conference, I opened the Google document on my iPad and started watching. An amazing thing happened: an extraordinarily rigorous conversation and debate started unfolding on the page in front of me. And I was able to take part: to answer questions, to challenge ideas, to affirm great readings (I know – I should have been concentrating on the conference presentation).
When I got back to school in the afternoon I caught up with some of the students who were positive and hugely enthusiastic about their ‘virtual’ lesson. The levels of engagement had been extraordinary. I was interested to see if there were further applications of this method (beyond providing virtual cover!)
The next day, Tuesday, I was being observed teaching an AS set by a New Zealand teacher looking at the use of technology in various British schools. I felt duty bound to ‘do something with tech’, and decided to repeat yesterday’s experiment.
The class were studying Robert Frost and we spent about 30 minutes reading and discussing the wonderful “Stopping by Woods on a Snowy Evening”. No technology in play except paper and pens. We then ran a 20 minute “virtual conversation” along exactly the same lines as the day before: one Google doc, all students editing, questions and answers.
But why do this with all students and the teacher present in the class rather than just have a discussion? Well, here was some of the feedback from the students at the end of the lesson:
- They really liked the relative anonymity;
- They felt they could work at their own pace;
- Several commented that they could ‘go back’ to a discussion and add comments and noted that in ‘real time discussion’ the class would often already have moved on;
- They enjoyed adding to and qualifying each other’s ideas in a way they felt they wouldn’t necessarily always be able to do in conversation;
- They loved the fact that they ended the lesson with a full transcript of the questions and answers (which they reviewed for homework, before writing up a reflection on their blogs).
My New Zealand observer (another English teacher) also started taking part in the online discussion. It was almost infectious. There was a buzz of engagement in the room.
I’m not without reservations: I certainly wouldn’t do this all the time and classroom discussion will remain a key element of virtually all of my lessons. I also worry slightly about the lack of moderation (how do I help the students to understand why some responses might be more successful than others?)
Nevertheless, I will definitely be using this relatively simple tool again. A class collaborating on one document is powerful. It’s a good example of technology letting me do something I couldn’t do a few years ago with, hopefully, tangible benefits for students’ understanding and engagement.
At the beginning of this term we set out to experiment with some new models for CPD or Professional Learning. One of the things we were keen to explore was whether simple technological models had any value in engaging teachers in discussion about teaching and learning that would help them to improve their practice in the classroom. We also had a hope that there might be better alternatives than the “all staff meet in one place at a particular time and listen to a lecture” model.
As one of the points of focus for the term was assessment and feedback, we set out to create a Google+ community to host a series of discussions. The idea was that anyone could sign up and that there would be no time-specific sessions so that individuals could interact when and how they wanted.
At the beginning of the course the “moderator” laid out a set of user principles for engagement over the 3 weeks that the community would be running. His short post read:
“Improving our assessment and feedback right across the school is a key focus for this term so this community could help to shape ideas and forge interesting ways forward as well as sifting our contacts and networks for examples of the very best that others are doing in this field.
I’m not sure what the outcomes will be (although I hope that a couple of you at least might be motivated to blog about our discussions and ideas). However, might I suggest the following as a kind of minimum requirement of being involved in the group? That, over the course of the 3 weeks we all:
- Post at least one link to an interesting idea/ blog/ link;
- Post at least one (however short) personal reflection on effective feedback and/ or assessment.
And that, in addition, we all:
- Comment at least once a week on any of the posts that have gone up.”
28 teachers signed up for the community and it remained open for the 3 weeks intended at the end of last half term. There was a fantastic range of curricular posts from English to PE to Maths to Economics and a decent discussion on many of them. Posts looked at a huge range of thoughts and ideas as well as curating blogs and recommending reading; the 3 headings for contributions were: general discussion; interesting blogs, websites and reading; reflections on interesting and innovative personal or departmental practice.
We discussed, as examples: the issues of grading work; ensuring the quantity and quality of all feedback; student reflections; using trackers; using digital strategies to support marking and assessment; peer-marking; critique; blogging as an assessment strategy, and much more. In fact, and on reflection, perhaps the range of material covered was almost too extensive and we might have been better focusing more on specific ideas and practices.
At the end of the course we sent out a very short survey to try to engage with how successful teachers felt that the process had been and whether this is a model that we could develop profitably in the future. The responses made for interesting reading.
Teachers who took part liked:
- “The flexibility to interact at a time that suited me and therefore allowed me to give it my full focus at that moment”
- “Learning about what was happening in other departments”
- “the ease of access and sharing ideas”
One of the feedback questions we asked was to challenge teachers to consider whether, in their opinion, the course had had any impact on student learning. The responses were surprisingly positive, several reporting that they felt there had been a significant positive impact on student learning in their classrooms.
A couple of respondents suggested a plenary session would be useful. This is something we will consider in the future; a Google hangout is obviously one way of conducting this that we might think about. Several contributors thought that more interaction would have made for an even more successful experience. Balanced against this, however, was an exhortation not to exhort: that requests to post made the experience feel “pressurized, and not natural.” That’s an interesting balance for future moderators to consider. Equally, some commented on the lack of quality control. That’s another one to think deeply about.
To finish: one final comment from one of the teachers who took part: “I am disappointed there is not a similar group next half-term looking at another topic.” Now it’s time to make that happen; step forward the next moderator, please.