New Test for Computers – Grading Essays at College Level

Imagine writing an essay for a college class, and, instead of receiving personal feedback from an expert who spends five or ten minutes per page writing personalized reactions and tips for improvement, imagine that your work is never actually read by a human being who could recognize, appreciate, and encourage your accomplishments. Imagine that your essay is instead scored by a software program.

Imagine a higher-education culture that increases class sizes and gives its own graduate students fewer opportunities to apprentice themselves in the classroom as teaching assistants, relying instead on free essay-scoring software.

Imagine a software company that dismisses sound educational practice as expensive and inconvenient.

Imagine a reporter for a top-quality news organization trivializing a complex issue with a lead that could have come straight from that software company’s PR team.

Imagine taking a college exam, and, instead of handing in a blue book and getting a grade from a professor a few weeks later, clicking the “send” button when you are done and receiving a grade back instantly, your essay scored by a software program. —NYTimes.com.

I can imagine something like this being a moderately useful self-check tool, but only if students get to determine how the computer weighs each component (so they can see just how artificial the rubric is), and only for certain assignments. For example, I can imagine a software program scanning to see whether a student has submitted a news article that uses dialogue labels such as “claimed” or “explained” instead of the more neutral “said.” I can imagine a journalism tool that flags news articles with no direct quotations, or too-long paragraphs, or a high percentage of opinion words outside quotation marks.
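To make clear just how shallow such keyword-level checks are, here is a minimal Python sketch of a rule-based flagger along the lines described above. The word lists, thresholds, and function name are invented for illustration; this is not any real product, just the sort of pattern-matching such a tool would rely on.

```python
import re

# Illustrative word lists; a real tool would use much larger ones.
LOADED_TAGS = {"claimed", "explained", "admitted", "insisted"}
OPINION_WORDS = {"clearly", "obviously", "unfortunately", "amazing", "terrible"}

def flag_article(text: str) -> list[str]:
    """Return surface-level warnings about a draft news article."""
    flags = []
    words = re.findall(r"[A-Za-z']+", text.lower())

    # Non-neutral dialogue tags ("claimed", "explained") instead of "said".
    used_tags = LOADED_TAGS.intersection(words)
    if used_tags:
        flags.append(f"non-neutral dialogue tags: {sorted(used_tags)}")

    # No direct quotations (straight or curly quotation marks).
    if '"' not in text and "\u201c" not in text:
        flags.append("no direct quotations")

    # Overly long paragraphs (threshold is arbitrary).
    for i, para in enumerate(text.split("\n\n"), start=1):
        if len(para.split()) > 150:
            flags.append(f"paragraph {i} exceeds 150 words")

    # High share of opinion words outside quotation marks.
    outside = re.sub(r'"[^"]*"', "", text).lower()
    outside_words = re.findall(r"[A-Za-z']+", outside)
    if outside_words:
        share = sum(w in OPINION_WORDS for w in outside_words) / len(outside_words)
        if share > 0.02:
            flags.append(f"opinion-word share outside quotes: {share:.1%}")

    return flags
```

Note that nothing in this sketch reads the article; it only counts strings, which is exactly why it can check conventions but cannot assess writing.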

But as an English teacher, I read student essays for reasons other than checking keyword clusters.

One-on-one oral exams are a lot more comprehensive and less prone to cheating than essays or multiple-choice tests, but they are time-consuming. I do require face-to-face consultations in various classes, but they are part of the writing process, and thus part of the instructional process, not part of the testing culture.

If I asked students to speak into a microphone, but used a computer to analyze their vocal patterns to detect levels of confidence and key term clusters, I could not in any honest sense represent my instructional method as involving oral presentations.

An instructor who designs an assessment that includes a writing prompt, but who plans to assess the student’s work by computer, with no human reader, is probably better off using a series of fill-in-the-blank and multiple-choice questions.

16 thoughts on “New Test for Computers – Grading Essays at College Level”

    • Agreed. Years ago at a “serious games” conference, I noticed that educators and software creators at one session were butting heads because the techies were using the concepts “testing” and “assessing” and “teaching” almost interchangeably, but the educators seemed to think the techies were just being snarky or evasive. I finally said “testing is a subset of assessment,” which led to a brief discussion of formative vs summative feedback.
