Grading writing: The art and science — and why computers can’t do it

Tech companies and university administrators get excited from time to time about the value of software that purports to evaluate student writing. This article does a great job explaining exactly what it is that writing teachers do when they respond to student writing. (We’re doing a lot more than looking for misplaced commas.)

The past few weeks brought yet another declaration that a computer program can grade writing. More recently, the National Council of Teachers of English published a research-based explanation of why machine scoring falls short. How computers grade (most successfully only with short, well-circumscribed tasks) is well documented, and I’ve written a short analysis of their aspirations and shortcomings.

But what goes into professional writing teachers’ responses to student writing?  Notice that I’ve chosen the term “respond,” which certainly includes grading: how good is this text on some scale of measure? “Respond” is a bigger term, though: what ideas and reactions does this writing create?  How might its author improve similar writing in the future?  It’s one thing to say whether your writing is any good; it’s quite another to explain to you helpfully why.

Any piece of writing is good or bad within at least five dimensions:

  • how well it fits a given readership or audience;
  • how well it achieves a given purpose;
  • how much ambition it displays;
  • how well it conforms to matters of fact and reasoning; and
  • how well it matches formal conventions expected by its audience.

These dimensions intersect, and teachers have to solve a cat’s cradle of their interactions to discern quality. — Washington Post

5 thoughts on “Grading writing: The art and science — and why computers can’t do it”

  1. “Despite all this complexity, grading per se is reasonably easy for experienced teachers. They can confidently, even quickly, judge whether a given paper is an A or C.” And this is exactly why teachers whose primary subject is not English need extensive training, rubrics, and samples to learn how to grade writing assignments within their majors and programs. Why, even after teaching my writing-in-math course, it needs _a lot_ more revision. And my rubrics are still insufficient :)

    • You once very helpfully explained to me the difference between mathematics and arithmetic, which I learned is kind of similar to the difference between rhetoric and grammar. Having had little exposure to theoretical mathematics (beyond the tesseract episode in Cosmos, the Heinlein story, and Flatland), I could only imagine advanced mathematics as an extension of arithmetic.

      While there are many ways to be “right” in an essay, some ways of phrasing things are more persuasive, more engaging, more pithy than others. Likewise, sometimes a student with great ideas can struggle with accurate expression, or a student who writes meticulously correct sentences but unimaginative and overly “safe” ideas needs the right kind of encouragement.

      My semesters are very end-loaded in writing classes. It is very easy to create a writing prompt, but evaluating student writing takes a long time. Students in my Lit Crit class are submitting drafts of 20-page papers. Fortunately the students at this level won’t need much help with grammar. But the rubric has to be very general, since there are so many “right” ways to do literary criticism.
