Tuesday, December 09, 2014

Some Thoughts on Examination

If someone is learning how to play an instrument or how to draw, there is a straightforward way of testing them. You give them an object (some sheet music or a model) and ask them to represent it (to play it or draw it). The result may not be the most artistically interesting performance, but it will demonstrate a level of skill under the circumstances. You put something in front of them that you expect them to be able to represent (through a performance of the ability you've been trying to teach them) and then you watch them do it. Sending them home, and then letting them return with a finished drawing or a recording after, say, a week, would sort of miss the point. We now have to trust that it was in fact the student who produced the representation. And we wouldn't know under exactly what conditions it was produced in any case. There are too many ways of cheating if the process is kept out of sight.

I've been thinking about how this model might be applied in more bookish subjects. Wouldn't it be possible to examine the students' mastery of a sociological theory, or a historical period, or a literary corpus, by sitting them down in front of a computer for four hours with the task of writing, say, five individual paragraphs, half an hour at a time? The first half hour is spent planning out their essay. They then submit one paragraph every half hour. Finally, they are given an hour to revise all five. They can be graded on both the individual paragraphs and the full composition, each of which shows something in particular.

By limiting the resources they can bring with them to the exam (a small set of paper books, for example) it would be very easy to detect patchwriting and plagiarism. Their essays could be automatically run through a plagiarism checker comparing them against exactly the books they were allowed to bring with them. This would allow us to make an important concession to proponents of patchwriting: it would now be possible to stop treating it as a "crime". Even plagiarism could be treated simply as poor scholarship. If you submit five paragraphs that are simply transcribed from the books you were allowed to bring with you, you don't get kicked out of school but you do get an F. Just as a pianist would if she didn't play the piece she had been assigned but openly played a CD of Glenn Gould's performance instead.

If this set-up were implemented, there would be absolutely no ambiguity about what they had learned to do during the semester. And it would be obvious to the students what they have to become good at. Now, you can give them all kinds of more "interesting" assignments throughout the year, and you can give them as much feedback on them as you like, including an indication of the sort of grade they might receive ... if it counted. But this will work best if you don't let course work during the term contribute to the grade. It's just practice, training. You tell them how well they're doing, but you only, finally, judge their performance at the end.

Let's construct an easy example. Imagine a one-semester course on three of Shakespeare's tragedies: Macbeth, Hamlet, and Othello. The students are to bring the text of each play, and the collection of essays (perhaps a casebook) that was assigned in the course. At the start of the exam they are given a recognisable question, perhaps not quite as familiar (from the lectures) as "Why didn't Hamlet kill Claudius immediately?" but something like that—a question that reveals ignorance if its relevance is not immediately apparent to them. It's the sort of question that after a semester of Shakespearean tragedy they should have a good answer to. Not something they're supposed to be able to come up with an answer to, but actually have one for going into the examination. For each play, in the context of its particular set of interpretations (in the casebook), there will be many different possible questions. The trick is that they don't know exactly what will be asked of them, nor of which play. All they can do to prepare is to understand the play and its interpretations. And they can get their prose in shape.

They know they will need to quickly and efficiently (in thirty minutes) plan out a five-paragraph essay. They will then have to compose five paragraphs in a row, a half hour at a time. (I've discussed the technical issues with the IT department at my university and it would be a simple matter to set up a computerised exam like this.) Then they'd have an hour to polish it. Students who are capable of such a performance have acquired not just valuable knowledge about Shakespeare's tragedies, but also a set of writing skills that will serve them (if they keep them in shape) for the rest of their lives.

And such assignments would be easy to grade. You would be able to determine at a glance what the students are capable of, and how well they understand the play. As, Bs, Cs, and Ds would be very easy to assign. Fs would result from radically incomplete or ignorant attempts, or, like I say, plagiarism. In four hours a student would have been able to provide a completely unambiguous demonstration of their understanding of the course material. And given only a few minutes per assignment (time could be saved by grading one out of the five paragraphs at random + the whole composition), a teacher would not only be able to painlessly complete the grading, but also get a good sense of how effective they are as teachers.

I'd love to hear what readers of this blog think of this idea. I really think this is how we should do things.


Jonathan said...

This is how we do our M.A. exam, except with no books or notes allowed. We have a laptop without internet access. We have a time limit but no set number of paragraphs. It is extremely difficult. The student must tell us what s/he knows about a given subject.

Thomas said...

When in the program does this exam happen? It strikes me as an excellent way to address fundamentals.

Jonathan said...

It's at the end of the first two years, or in March of the second year, really. Frankly, it works well in motivating students to read the list. The results are rarely very good, even among passing students. We need to make them responsible for fewer texts, and know those better.

Thomas said...

That's interesting. Why shouldn't the results be good? You're probably right that fewer texts would help. Also, perhaps, a shorter assignment (do you have any length requirements?) so that you can demand more polished prose.

Jonathan said...

There is a disconnect between course work, geared toward in-depth analysis, and the exam, which asks what you know and can recall from memory about a huge number of potential texts.

Thomas said...

I'm thinking about how this would work in music. A piano player has to demonstrate a pretty deep understanding of the piece to be played. So, in a sense, the playing would demonstrate an "in-depth analysis" even though this analysis would not be made explicit.

Isn't there a way of doing this with writing? The student is given a relatively superficial task, the performance of which would demonstrate (or fail to demonstrate) an understanding that could only have come from close study.

Jonathan said...

It would be more like: sight-read this Chopin (when you have been practicing mostly Mozart and Schubert). And, the catch is, you have to sight-read it without looking at the score, with your vague memories of the text.

Thomas said...

But it would be possible to do something that looks more like: play the Mozart you've been practicing all semester. Like I say, the students could bring their own copy of the three plays (not knowing which one they will have to demonstrate knowledge of). The music student brings her own copy of the score, which will have various notes and markings (fingering) on it.

MDH said...

As a method of assessing students' knowledge of or mastery of content, there is much to recommend this practice. As an assessment someone might implement, I have doubts about the validity of responses.

Obviously all assessments "test" both students' content knowledge and their facility with that particular testing instrument. For common instruments or modalities, we recognize that some students may have a particular difficulty with them but, at least, they've got lots of experience with them, so the instrument effects in their responses kinda-sorta come out in the wash. The modality you're proposing, if more widely adopted or somehow already familiar to the students, would be great. If not, you're testing a whole slew of new modality anxieties. People who obsess over proofreading. People unsure about their spelling or word choice. People with mild or undiagnosed language difficulties who have perfectly good ideas and can speak them (but not write them easily) or who can write them (but not speak them). People who are hyperperfectionists, who would like to find THE thread that links all the plays together in some brilliant piece of analysis and spend 30 minutes flailing before coming to their senses and ratcheting down the ambition.

All of those are common, less-than-optimal things that students do and that they should work on. Things that should, with time and effort, be improved. Things that as an instructor you may want to identify so that, maybe, assistance can be offered. On the other hand, maybe not. Capturing some of those foibles (and not other ones) may have no business in the assessment you actually care about making. I'm not saying anything this audience doesn't already know, but I'm quite cautious about this and other synthetic time-pressure instruments.

Thomas said...

That's a great point, MDH. It's true that if this sort of exam were implemented you'd have to spend a great deal of time "teaching to the test", and in this case it would actually be justified, since learning how to pass the test would require building real prose muscle, critical thinking, intellectual agility, etc. The students would have had real "practice".