Tech companies and university administrators periodically get excited about software that purports to evaluate student writing. This article does a great job explaining exactly what writing teachers do when they respond to student writing. (We’re doing a lot more than looking for misplaced commas.)
The past few weeks brought yet another claim that a computer program can grade writing. More recently, the National Council of Teachers of English published a research-based explanation of why machine scoring falls short. How computers grade (successfully, at best, only with short, well-circumscribed tasks) is well documented, and I’ve written a short analysis of their aspirations and shortcomings.
But what goes into professional writing teachers’ responses to student writing? Notice that I’ve chosen the term “respond,” which certainly includes grading: how good is this text on some scale of measure? “Respond” is a bigger term, though: what ideas and reactions does this writing create? How might its author improve similar writing in the future? It’s one thing to say whether your writing is any good; it’s quite another to explain, helpfully, why.
Any piece of writing is good or bad along at least five dimensions:
- how well it fits a given readership or audience;
- how well it achieves a given purpose;
- how much ambition it displays;
- how well it conforms to matters of fact and reasoning; and
- how well it matches formal conventions expected by its audience.
These dimensions intersect, and teachers have to untangle a cat’s cradle of interactions among them to discern quality. — Washington Post