April 11, 2009, by Teaching at Nottingham
Chris Rust on assessment and feedback
Chris Rust: “You don’t need to go to the literature and spend much time to find how important assessment is considered to be for learning and teaching in higher education. It probably took me five minutes to find these quotes, and there are many more saying the same thing. It is at the centre of the student experience. It is the curriculum, as far as students are concerned. We know, as the final quote suggests, that it’s almost certainly one of the most powerful influences on the approaches our students take to learning.
“So, assessment: crucially important, a crucial role in the learning process. That’s very clear from the literature, but again you don’t need to travel very far or, you know, go through too many Times Highers without finding criticisms of our practice.
“If you work your way back through QAA subject reviews, the single biggest criticism over and over again is around assessment and assessment practices, the speed of feedback, the quality of feedback, etc. And that, as has already been mentioned, has more recently been mirrored in the National Student Survey.
“But if you are working in your departments to try and create assessment strategies, if you do want to try and look at your practice, I suggest this may be a useful checklist.
“Firstly, it seems to me, we should be trying to ensure we’ve got constructive alignment, Biggs’ phrase. I’m sure many of you know what that means, but just in case you don’t, this is not a hugely complex idea. What Biggs means by constructive alignment is that, when you design your course, you should have clear learning outcomes. You should then make sure that your teaching methods clearly, explicitly, logically, are the best possible ways that you can think of to take your students to those outcomes, and your assessment, the key for this morning, your assessment should focus on whether the outcomes have been achieved or not. This does not seem, to me, to be rocket science.
“So ensuring it addresses that issue of validity I brought up earlier, ensuring our assessment processes truly assess the learning outcomes of the course.
“Clearly, we should have explicit assessment criteria. From the literature around deep and surface approaches to learning, comes this argument, that if you want your students to take a deep approach to their learning, you should try and avoid, quotes, ‘Threatening and anxiety provoking assessment strategies.’
“We need, I would suggest, to make sure we are engendering, as much as we can, intrinsic motivation, and it seems to me that links to ideas about relevance and purpose. Trying to give the students activities, as part of the assessment, where they can see why they are doing them, rather than wondering, ‘Why would I ever need to know this? Why would I ever do this? When I have left university, what possible use is this to me?’ And maybe, where possible, giving students choices. We know people are more motivated if they can do the things that they, themselves, have chosen.
“We need to pace student learning, I would suggest. In particular, this is even more important, it seems to me, with the widening participation agenda.
“And linked to that, we need to structure their skills development, to make sure that our assessments, in particular in the first year, are identifying the skills we want to develop, clearly helping support their development, and assessing them, and giving them feedback on it.
“And Mantz Yorke’s phrase, again linked to widening participation in particular, we need to find ways of allowing for slow learning and early failure.
“If we believe that, whatever we’re teaching, students need to be engaged, they need to actively engage, they need to construct understanding for themselves, we don’t believe in transmission models, and so on, if that’s what we believe about learning in general, why do we think trying to help students understand assessment, and assessment criteria, should be any different? Because up to this point, we’d been trying to make criteria ever more explicit; we’d been trying to help students understand criteria. So we suddenly said, ‘Well surely, we need to take a social constructivist approach to this, just as we would when teaching anything else.’ So we said, ‘Okay, what would a social constructivist view of the assessment process be? What would that look like?’
“So what we said was, ‘It’s no good just having explicit criteria.’ We’d already moved to this point from some of our work. ‘What we need to do, is we need to get the students actively to engage with the criteria. It’s no good just giving them the criteria and expecting them to understand these words, like analysis and evaluation.’
“One way that we found worked very effectively in our Business School was getting the students to mark pieces of work, using the criteria. And we’ve shown very clearly that those students, through that process, learn what those criteria mean, and go on to produce better work as a result.
“We’ve said repeatedly this morning, how important feedback is, and in that list from Gibbs & Simpson, there’s the whole issue about, how do you get students to engage with feedback? If we’re going to make feedback work, not only do we need to make sure it’s prompt, and written in a way they can understand, etc, we need to get them to engage with it.
“Tom Angelo, and I thought this was absolutely brilliant, said, ‘Well, if you want feedback to work, there’s one golden rule.’ And he said, ‘This is exactly the same as for your murder detective.’
“So firstly, you’ve got to give them a motive. You’ve got to give them a reason. ‘Why would I want to read this feedback? Why do I believe this is going to be any help to me? Am I ever going to have to do this thing again?’
“And I suggest this leads you to the notion that we should do far more work around first drafts, second drafts, rewriting, redoing exercises, than we currently do, but you need to give them some motivation for addressing it.
“Secondly, you’ve got to give them an opportunity. ‘When am I going to have a chance to put this into practice?’ And finally, you’ve got to give them the means. It’s no good telling them their analysis isn’t good enough if we don’t help them understand, ‘What would good analysis look like?’ What do they need to do to make it good?
“So I’ll stop there, and I think that is exactly 50 minutes, and I’ll leave ten minutes for questions.”
Nuala Byrne: “I think Karl, do you have one over there?”
Karl (Audience): “Yeah, you rightly criticised earlier the practice of measuring lots of different things and then adding them all together so we have the, sort of, numbers. And obviously there are problems with that, but at the end of the day you’ve got to be able to assign a degree classification. You can’t, you know [inaudible].
“So how do you go around that problem?”
Chris: “I’m not going to claim there are easy answers, but I think, firstly, we could focus far more on fewer, larger, more integrative assessments, which could come, if you’re on semesters, at the end of a semester, or at the end of the year. So the notion of yearly assessments which look at your programme learning outcomes, as opposed to the sort of atomisation of outcomes down into very small things at module level, which are all getting added together and marked, and so on. So I think I would firstly advocate linking assessment more to programme outcomes, possibly on a yearly basis. I’m talking about summative assessment now, and I think that would more truly fit the constructive alignment model of looking at what we think a graduate from our course should be able to do. And let’s go for those fairly big things, rather than the atomised assessments.”
Karl: “Okay, but I mean, if you take that model of having a single summative assessment at the end, isn’t that a problem in terms of making it, you know, threatening and…?”
Chris: “Yeah, I think there’s a tension there, and I think you’re right to spot it. I’m certainly not advocating that you leave it all to finals at the end of year three, and certainly, it seems to me, you would need good, formative assessment going on along the way, so that students are relatively confident about their abilities and the fact that they will be able to do those assessments that count. And so you’re back to, sort of, support and scaffolding and things.”
Chris Rust
External Consultant
Chris was the invited plenary speaker at the University’s Twelfth Learning & Teaching conference (January 2008).
This video was originally published as part of PESL’s Teaching at Nottingham collection.