Talking D&T

Shaping D&T Assessment: Neil Wright's view of recent research

February 06, 2024 Dr Alison Hardy; Neil Wright Episode 136

This week I chat with Neil Wright, whose path from the electronics industry to education brings a fresh perspective to his students at William Farr School. He talks with me about his attendance, sponsored by WF Education Group, at the PATT40 conference, where he had the opportunity to learn from and talk with D&T educational thought leaders.

In today's conversation, we delve into the world of Adaptive Comparative Judgement using RM Compare software.

The episode culminates with me encouraging Neil to do some research on his own practice to share with other teachers. We also consider the sharing of research within the D&T educational community.

(Text generated by AI, edited by Alison Hardy)

Links

Paper

Buckley, J., Seery, N. and Kimbell, R. (2023) “Modelling approaches to combining and comparing independent adaptive comparative judgement ranks”, The 40th International Pupils’ Attitudes Towards Technology Conference Proceedings 2023, 1(October). Available at: https://openjournals.ljmu.ac.uk/PATT40/article/view/1570 (Accessed: 25 January 2024).



Ciaran Ellis posted a thought-provoking question on LinkedIn recently: Do design decisions involve value judgements?

What do you think? Join the conversation over on LinkedIn and let us know what you think. 


Support the Show.

If you like the podcast, you can always buy me a coffee to say 'thanks!'

Please offer your feedback about the show or ideas for future episodes and topics by connecting with me on Threads @hardy_alison or by emailing me.

If you listen to the podcast on Apple Podcasts, please take a moment to rate and/or review the show.

If you want to support me by becoming a Patron click here.

If you are not able to support me financially, please consider leaving a review on Apple Podcasts or sharing a link to my work on social media. Thank you!

Alison Hardy:

This week's episode is with Neil Wright, who's a teacher in Lincolnshire. I'm going to let him talk a little bit more in a moment about where he is and what he does, but this is part of the PATT40 series, where people who attended the conference, spoke at the conference, ran the conference, are coming along and sharing some of their experiences and learnings. Beyond learnings, what sort of word is that? Beyond the conference. The conference happened in late October and early November 2023, and these episodes are coming out in 2024. So I've asked Neil to come along. He's a teacher and he was at the conference, and I'm really keen to hear his viewpoints. But first of all, Neil, would you like to share who you are, where you are and what you do?

Neil Wright:

Sure. So I work at William Farr School, which is, like you said, north of Lincoln. I teach mainly on the engineering side of things. I've been a teacher for around about 15 years. Prior to that I was in the electronics industry, and I was in that industry for around about the same amount of time, about 15 years.

Alison Hardy:

Right, okay. So you've kind of brought all of that industrial experience into this. I think you were saying, before we hit record, that this is your third school that you've taught in.

Neil Wright:

Yes, it is, yeah. So this is the one I've been at the longest, so it's definitely one I like to be at. It's very local to where I am, and I've got my family, who also attend the school as well, so it's a nice friendly feel to it.

Alison Hardy:

Yeah, it does. And that's a high compliment, isn't it, if your children attend the school? You must be feeling good about it. I'm now going to dig myself into a big hole there, so I'm going to stop. But yeah, I kind of know William Farr because I was teaching as head of design and technology at CASE DR school, well over 20 years ago now, actually, if I do those sums. So I'm kind of not up to date, not familiar with William Farr anymore. So, you've come on because you were at the conference. Can you tell us a little bit about how you got to the conference? Because that's kind of quite interesting.

Neil Wright:

It is. I mean, I wrote an application, and it was for a scholarship that was kindly donated by WF Education Group, which most people would probably know better as Technology Supplies, or TSL. Initially I didn't think I'd be at the conference; they told me I was second or third place, and then I think somebody pulled out and I got there in the end. So through both their kind financial support and the support of my school, yeah, we got there. It was great.

Alison Hardy:

Excellent. Yeah, so that's a big shout out to WF Education and Technology Supplies. I know Matt was behind getting that scholarship organised, and a group of us were involved in looking at the different applications, so now you're making me feel guilty that I hadn't met you before. But also, yeah, a shout out to a supportive head teacher; that's really good, allowing you to have that time. I presume it was during term time for you?

Neil Wright:

It was. It was straight after half term, so it was literally first hour back: I need to leave on a train tonight, can I go? And they were very good; they organised some emergency cover for me for the week and, yeah, we were off straight after school.

Alison Hardy:

That's brilliant. Oh, that's fantastic. So you were there for the whole week, from the Tuesday to the Friday.

Neil Wright:

That's right. Yes, so I arrived on the Monday night and left on the Friday afternoon, so it was a great week.

Alison Hardy:

So how did you feel about the time you got to Friday?

Neil Wright:

Absolutely knackered. It was absolutely exhausting. I think you go into sort of brain overload. There's so much to absorb, especially for someone of my limited cognitive ability. There are so many different angles it comes at you from, and I'm not a researcher, I'm not very academic, I'm more of a practical person, so I like to be able to apply things. So some things I found easier to grasp and could say, yeah, that's relevant, I'll focus on that one. And maybe, perhaps incorrectly, I chose by just looking through the titles and seeing whether there was anything that I liked based upon that. But in retrospect there were other things that, when you start to talk to people, when you're mingling and discussing things with other delegates, maybe I should have attended as well. It was very useful to read up on some of the papers that I did miss.

Alison Hardy:

Yeah, and you can always email people. That's what I would encourage you to do if you want to know more. Any of those researchers will be thrilled if you email them and say, can we jump on a Teams call and have a chat about your research and practice in teaching? I'm curious to see how that could be applicable. They would bite your hand off, and if they don't, then they're just not worth talking to. But you know, as an academic... right, first of all, I'm going to pull you up.

Alison Hardy:

Don't do yourself down about cognitive ability. It's just a different way of thinking, this. I remember the first PATT conference I went to, in 2011. Like this one, there were loads of papers, and it's like, how do you select, how do you choose? And you have to go by the title.

Alison Hardy:

Some of the titles are good and some of the titles are just kind of sexy, do you know what I mean? They're trying to capture your attention. Some of them, you're just like, I have no idea what you're talking about. And so you read the abstracts, and you're relying all the time on how well the person who's written it is communicating what it is that they're doing. Some academics do that really well, some researchers do that well, some don't do that very well, but it is a key thing. So, yeah, I'm not surprised. But when you talk about cognitive ability, it's just a way of thinking, this, and I think it's really exciting that you've got from it things where you can think about your own practice and reflect on that. So you've picked out a particular paper (I have, yes) which has got a lovely short title.

Neil Wright:

It has. Do you want to go for it? Modelling approaches to combining and comparing independent adaptive comparative judgement ranks, otherwise known as ACJ. So this was by Buckley, Seery and Kimbell, from the Technological University of the Shannon.

Alison Hardy:

Yeah, so Jeff Buckley, Niall Seery, Richard Kimbell. To give a bit of background: Jeff has been at quite a lot of the PATT conferences over the last probably five to ten years. He was Niall's doctoral student, and Niall is also over in Ireland; Niall was actually the examiner for my PhD. And then Richard Kimbell is the god of assessment in design and technology. You may have come across him when you were doing your teacher ed, but yeah, Richard's got the history: he was involved right at the beginning, from 1988 onwards, in what the subject was as it came into the national curriculum, and also in how to assess it. So he's a real leading authority on this. So, yeah, I gather that it was Jeff that did the presentation. Is that right?

Neil Wright:

It was, yeah, and he was excellent. He was an excellent communicator, so he got his points across very well, yeah.

Alison Hardy:

Yeah, so do you want to give us some background? I'll put links in the show notes to the paper so people can download it. And the great thing about Jeff is the way it's written: you can follow it. I mean, it's quite statistical, isn't it, in places?

Neil Wright:

It is, but the overall gist of it, the core concept, is the bit that I really got bitten by. I didn't get into the murky depths of the ways of allocating models to judges and things like that, the deep statistical models. It was the overall concept of how it worked that basically stuck with me.

Alison Hardy:

Yeah, so do you want to give us a bit of a synopsis of the paper, like a bit of a summary of what it is?

Neil Wright:

Yeah. So, first of all, ACJ is not something new. It's been around a long time. There are commercial tools, such as No More Marking, that people may have heard of, which have mainly been used for things in the written space rather than in the sort of design-idea or more STEM arenas: for music, or for art, or for designs. And the first thing it brought to my mind was the use of a new piece of beta software by RM called RM Compare.

Neil Wright:

So there are two versions of this. There's a basic version that allows you to just put things into rank order, so you can basically put two pictures or two designs up side by side and you just say which one's the better one. And the idea is that, by allocating to more and more judges, whether it's peer markers or a big group of teachers, you can remove any unwanted bias or any unconscious bias within the judges, and you can also eliminate, or identify, however you want to put it, any misjudging activity that may be subconscious or come through lack of training or misunderstanding by the judges themselves. So you do this on screen, two pieces of evidence at a time, side by side, saying that one's better, that one's better, and you go through maybe 10 or 20 rounds of judging. You eventually come up with a rank order, something on a logarithmic scale that basically tells you which one's the best and which one's the worst, but it doesn't actually tell you a level or a grade of any sort. And the paper basically discusses taking that one stage further, which is where the advanced version of the RM Compare software comes in, and it discusses using rulers: taking this logarithmic scale representing the ranking of work that's been judged and transferring it to a linear scale.
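The pairwise process described here can be sketched in code. The following is a minimal illustration, not RM Compare's actual algorithm: pooled "which is better?" decisions from simulated judges are fitted with a simple Bradley-Terry model (a standard way to turn pairwise wins into positions on a log-odds scale), and the positions are sorted into a rank order.

```python
import math
import random
from collections import defaultdict

def fit_bradley_terry(comparisons, n_items, iters=200):
    """Estimate item strengths from (winner, loser) pairs via MM updates."""
    wins = defaultdict(int)        # total wins per item
    meetings = defaultdict(int)    # comparisons per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        meetings[frozenset((winner, loser))] += 1
    strength = [1.0] * n_items
    for _ in range(iters):         # Hunter's MM algorithm for Bradley-Terry
        updated = []
        for i in range(n_items):
            denom = sum(
                meetings[frozenset((i, j))] / (strength[i] + strength[j])
                for j in range(n_items) if j != i
            )
            updated.append(wins[i] / denom if denom else strength[i])
        total = sum(updated)
        strength = [s * n_items / total for s in updated]  # normalise
    return [math.log(s) for s in strength]  # log-odds ("logit") positions

# Simulated judging: the item with higher true quality usually wins.
random.seed(1)
true_quality = [0.0, 1.0, 2.0, 3.0]
comparisons = []
for _ in range(400):
    i, j = random.sample(range(4), 2)
    p_i_wins = 1 / (1 + math.exp(true_quality[j] - true_quality[i]))
    comparisons.append((i, j) if random.random() < p_i_wins else (j, i))

logits = fit_bradley_terry(comparisons, 4)
rank = sorted(range(4), key=lambda k: logits[k], reverse=True)  # best first
```

Notice that, as Neil says, the output is only a rank with relative distances: the logit values say nothing by themselves about grades or levels.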

Neil Wright:

And then, once you've transferred it to a linear scale, you've basically got something between 0% and 100%. It still doesn't tell you whether everything was within a certain grade boundary. It tells you the average point; it gives you a comparison point against the average, so you can see which one's better than average and which one's below average.
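That "transfer to a linear scale" can be pictured as a simple rescaling. The sketch below assumes a plain min-max normalisation onto 0-100 for illustration; the paper's modelling of combined ranks is more sophisticated than this.

```python
# Illustrative only: squash logit-scale rank positions onto a 0-100 band.
def to_percent_scale(logits):
    lo, hi = min(logits), max(logits)
    if hi == lo:                       # all items judged identical
        return [50.0] * len(logits)
    return [100 * (x - lo) / (hi - lo) for x in logits]

scores = to_percent_scale([-1.2, 0.3, 2.1])
# The lowest item maps to 0.0, the highest to 100.0,
# and everything else sits in between.
```

As the discussion notes, this is still relative: a score of 80 only says where a piece sits within the judged set, not whether the whole cohort was strong or weak.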

Neil Wright:

But it may be that the cohort you've just marked is all above the average compared to another set that you've marked.

Neil Wright:

So in relative terms it doesn't tell you much, just that, for that set of people you've just marked, the work of one person is better than another person's. So one way around that is that you can turn this into a ruler. Once you've signed it off and you've said, well, that piece of work at the top of the scale is maybe worth a grade 9 and the one at the bottom is worth a grade 3, then you can make that into almost a comparative ruler, something that you can use to compare against another rank. And that allows you to say, well, is that a good fit in my rank for another cohort that I've marked? Is this a good fit for a grade 3, or is it a better fit for a grade 5? So you can compare something which has been checked and signed off against a certain grade level, against what you're producing yourself in your own rank order.
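The "ruler" idea, comparing new work against signed-off anchor pieces, can be sketched as a nearest-anchor lookup. The anchor positions and grades below are invented for illustration, and the best-fit rule is an assumption; RM Compare's advanced version presumably does something richer.

```python
# A hedged sketch of the "ruler": signed-off pieces of work sit at known
# positions on the judged (logit) scale with agreed grades, and a new
# piece is given the grade of the anchor it best fits.
def best_fit_grade(item_position, anchors):
    """anchors: list of (position_on_scale, agreed_grade) pairs."""
    closest = min(anchors, key=lambda a: abs(a[0] - item_position))
    return closest[1]

ruler = [(-2.0, 3), (-0.5, 5), (1.0, 7), (2.5, 9)]  # hypothetical ruler
grade = best_fit_grade(0.8, ruler)  # nearest anchor is (1.0, grade 7)
```

This is the step that turns a purely relative rank into something comparable across cohorts: the anchors carry the grade meaning, exactly as described above.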

Alison Hardy:

Yeah, so, because it's quite complex, I'm just thinking: in basic terms, what does it allow you to do?

Neil Wright:

What first struck me about it: I also work as an exam moderator, but even outside of that, as teachers, we all get to May and it's the marking of coursework, and we all get around a table, certainly at our school we do, and we all do the internal moderation before things get sent off to the exam board. You stick to the marking scheme or whatever, but there's always some element of people sticking up and arguing for certain students that maybe there's a personal preference for, and then that sort of gets filtered out and discussed and argued away and rationalised down to something that actually means a good rank order for what you're going to submit. So, in its basic form, this ACJ method allows you to put some sort of scientific method to that. In the paper it was originally discussed for use between MATs, I think, so that different departments within schools, different schools within multi-academy trusts, could rationalise themselves so that there was no bias between one school and another school.

Alison Hardy:

Yeah, it was about schools, but it wasn't about MATs, because it's set in Ireland.

Neil Wright:

Right, I apologize.

Alison Hardy:

I'm just trying to think. Yeah, it wasn't, but there were a couple of different schools involved, with 13 different teachers involved in doing the ranking, and they tried some different processes. So the idea was: I could be in my classroom with my D&T group and I rank them, and I'm just ranking them against themselves, not against any criteria; you're in your school ranking your class. How do we then bring that together on a more national scale and have confidence in it? That's why they were doing the statistical analysis about the reliability of the process they were going through. There are all sorts of queries around this, but they've gone through quite a rigorous process and I think, as you say, in terms of thinking about the classroom, it's a process that, if you go through it, starts to remove some of that subjectivity.

Alison Hardy:

It was quite interesting when you looked at the charts that the highest and the lowest (nobody can see, but I'm waving my arms around here as normal; I think I should put some of these podcast recordings on YouTube so we can see how animated I get), the highest and lowest had a higher degree of... God, what's the technical word now? If Jeff listens to this, I'm sure he'll be shouting it at his speaker. I'm going to say tolerance, but that's not the right word. Do you know what I mean?

Neil Wright:

You're into the SSR now, you know. A high level of error.

Alison Hardy:

Potentially, isn't it?

Neil Wright:

So is that the scale separation reliability number? I think... oh God, I might well be wrong, but I think it is, anyway. That's deep somewhere within the murky statistics that we, as teachers, don't really need to know about. I think the basic premise is this:

Neil Wright:

You can do a basic "who's best in the class, this one or this one?". So, in its basic form, without any statistics, just using it as a rank, you could give this tool to a class of, let's say, Year 10 students, 20 people doing D&T, and they can see between themselves: which one am I going to give the best award to, and which is the worst? And then they get the opportunity to see the difference between what's at the top and what's at the bottom. So, in a peer review situation, it allows them to say: yeah, I can see why mine's somewhere in the middle; maybe it's not as good as that one, but it is better than that one. And they get to work out for themselves maybe why, by picking out the good bits and the bad bits.

Alison Hardy:

Yeah, but the key thing is, and I know you made an interesting point because you sent me some notes beforehand, it's about the criteria, isn't it? It's about what it is that's being assessed, and it's very easy to end up assessing the way it looks.

Neil Wright:

Absolutely.

Alison Hardy:

But actually having the criteria, and they refer to this in the paper, which I think is something that we don't necessarily discuss in D&T as much as we maybe used to. I've done quite a bit of work on it just recently. It's about capability, this holistic construct of capability. So the assessment is: we're assessing pupils' design and technology capability, and that's what Richard Kimbell and Kay Stables' work at what was the Technology Education Research Unit was all about: assessing capability in a holistic manner. But we need to understand what capability is, don't we? So if you give it to children, or to inexperienced teachers who don't know what they're looking at, what they might look at is, what's the better phrase, the prettiness of the work, the neatness of it, for example.

Neil Wright:

So it's all of those things that you can pull out. It can be, yeah, looking beyond communication factors and into insight, into innovation, if that's the main criterion you want people to be ranking on. But knowing the criteria to rank against, as you say, is the important bit. So you've got to have criteria that are fixed and firm and steady across all of the judges that are taking part in this.

Alison Hardy:

Yeah, yeah, absolutely, absolutely. And actually, I've just got the paper up in front of me and, looking at it, they cite a paper by Seery, Niall Seery, from 2019, which I've just looked up. I'm going to put a link to this in the show notes. I mean, this is a cracking title, it's really snappy, I'm going to take a deep breath: Integrating learners into the assessment process using adaptive comparative judgement with an ipsative approach to identifying competence-based gains relative to student ability levels.

Neil Wright:

That rolls off the tongue.

Alison Hardy:

Doesn't it just, doesn't it just? It shows you the complexity of what they're doing. I'm going to put a link to that in the show notes, because actually what I like there is it's about, you know, involving the learners in this. And that paper, you might have a read of it, Neil, is actually available to download because it's open access, so it's not behind a paywall, which is even better.

Alison Hardy:

Okay, they've done it with undergraduate students, so it's not schoolchildren, but it's something that could be used to kind of stimulate some discussion, and I might talk about this in my Thursday episode, like a follow-up to what you and I talk about. So, we've talked a little bit about how this idea of adaptive comparative judgement can be used with children and with teachers. In your experience, in your role as an exam moderator, do you see much of this within that?

Neil Wright:

Yeah, I mean, I was quite surprised when I first started being a moderator. Sometimes you find that some schools are absolutely spot on, and the rank order that they've selected, which is the main criterion, the benchmark, the mainstay for everything that happens subsequently with regard to the allocation of marks, is spot on. But in a number of cases it isn't.

Neil Wright:

And when it comes to moderation, if that rank order is off, then the channels for going back a step and throwing that back to the centre become a lot more arduous. So a mechanism to solidify that rank ordering and give it some scientific basis becomes quite appealing, really.

Alison Hardy:

Yeah, yeah, and you can see... I mean, this is what we used to do a long time ago when I first started teaching. I remember we used to physically get together, take a sample of work and rank it for marking, and somebody from the exam board would come along and take part in that. The first one I did of those would have been in 1994 or 1995, and I went along and took some work. My marking was so all over the shop that the moderator had to come to the centre, you know, in the days when they did that sort of thing. Because when you're new to it, when you've not got that experience, if you're a single teacher, and there are more and more single teachers in departments, you may not have that experience.

Alison Hardy:

I was actually working in quite a big department at that time, but you've got nothing to benchmark against, and so you can be all over the place. So I think there's also a way here of schools working together to do some of that, what we used to do, bringing people together and doing ranking and comparisons. But, as you say, this gives a much more rigorous approach to doing that.

Neil Wright:

Yeah, I mean, all it takes is the exam board to sign off a ruler, to say, all right, this is what the benchmark is for a level four or a level five or a level six. And then you just look and you compare whether it's for an initial idea or for a development or for the quality of make, all of which can sometimes be a little subjective, and you see which is the best fit.

Neil Wright:

And then it's just, is it that one? No, I'll go down to the next one. Is it that one? It's just a straight comparison, a very easy visual comparison. And the main premise is that it's also meant to be a speed-up exercise, saving teachers time on the moderation and assessment side of things.

Alison Hardy:

So, as we come to the end then, Neil, what are your takeaways for you professionally, and what do you think it could be useful for, to prompt people to think? And beyond you, why should people read this paper?

Neil Wright:

Well, I think it's definitely one to keep an eye on. The specific software that's been analysed and used here, which again is RM Compare, I think is still in its beta phase, so it's something that has definitely got potential. And in the next generation, which is currently evolving, the advanced version of the software, this idea of rulers is a lot more solidified and it's a lot easier to use, a lot more graphical. So it's definitely something that is evolving, something to keep an eye on and have an idea about as, say, a future alternative to the way it's done now. I think its main usage at the minute, for me, is for peer review.

Alison Hardy:

Right, within your classroom. And it's about getting that criteria right, isn't it? Yeah, okay. So it's really cutting edge, and by attending the conference you heard about something that's cutting edge about what's happening, and this new software that's, as you say, in beta, and then also some practicality about what you might be able to think about doing in your classroom.

Neil Wright:

Absolutely.

Alison Hardy:

Yeah, okay.

Neil Wright:

Good. There are a lot of other things that link into this that were also discussed at the conference, so it's almost not a one-trick pony. There are papers on, for example, spatial assessment, which again links into this, where you do origami or something which is physically 3D, and then how do you assess that? If you can come up with some set of judgement criteria, then you can rank one against the other: who's done the best? So it's just a photograph; you bulk-upload the photographs, a set of people judge them, and you come up with a winner for that cohort.

Alison Hardy:

Yeah, yeah. I'm just looking at the PATT conference proceedings as you talk about spatial, yeah. So Jeff's on another one of those papers, about spatial ability development, and another two of those papers. There is a lot to be thinking about there. Some of the more written subjects think they've been leading the way in this, but actually D&T has been leading the way, maybe in a quieter manner. So it's really good that you've been able to bring this and share it. Thank you. And I'm going to ask you a curveball question now, which I didn't prime you about: have you got any thoughts about getting involved in doing any research, or doing any research on your own practice, that you might share with us? I don't know, maybe at a future PATT conference, but internally in your school or within the D&T community?

Neil Wright:

I would like to. I think maybe I would start off by... I've written sort of articles for magazines in the past, the dim and distant past, so maybe that's something I'd revive and do again. I'm not sure I'm up to the standard of writing something at a PATT standard. But one of the premises of the article I wrote for getting the scholarship was about using modular electronics, so getting away from the fact of kids taking things home.

Neil Wright:

So you're doing problem solving with kit that could be reused, to keep costs down as one requirement, but also giving the freedom to explore and experiment. So that was one avenue that is a possibility, but I think I'd start off small and then grow it from there.

Alison Hardy:

Yeah, well, again, I'm not saying do anything grand; you know, it depends what you see as grand.

Alison Hardy:

But actually I think anything that any D&T teacher does to try something out in classrooms and evaluate it, and then share that evaluation in whatever form, a short post on social media, a blog post or whatever, I think is exactly what we need. It doesn't have to be a seismic thing. I mean, I did my masters, actually, when I was teaching, and in one of my assignments I did some research about the use of questioning to promote creativity. And then I did another one on what pedagogies we can use when using CAD/CAM in D&T lessons. So, again, nothing huge in a way, but actually it was about my teaching, about what was happening in my classrooms, what was pertinent to me at that time. So, yeah, well, if you want to catch up and chat any more, now I've got you on here I'm not going to let that go.

Neil Wright:

Thank you.

Alison Hardy:

If you want to talk further about any of those ideas, I'm more than happy to have that conversation. Brilliant.

Neil Wright:

Thank you very much.

Alison Hardy:

I'm more than happy to talk about modular, because I was brought up, in my first school in the Cotswolds, on Fischertechnik.

Neil Wright:

Yeah, it's that same premise, revived now, but with, yeah, more sexy kit.

Alison Hardy:

Fischertechnik! Mike Ashburn, my first head of department, would say that Fischertechnik was pretty sexy. But yeah, it's moved on. But it is a cost thing as well, and it also takes away from that idea that we have to be making something to take home, which then becomes unsustainable. That's a whole other conversation. But anyway, thanks very much for today's conversation. That's been absolutely great. Thank you, it's been good to meet you. Likewise.

Teacher's Experience at PATT-40 Conference
Comparative Judgment for Assessing and Ranking
Using Technology for Assessment and Research
Master's Research and Creativity in Teaching