Imagine writing an essay for a college class, and, instead of sparking personal feedback from an expert who spends five or ten minutes per page writing personalized reactions and tips for improvement, learning that your work was never actually read by a human being who could recognize, appreciate, and encourage your accomplishments. Imagine that your essay was instead scored by a software program.
Imagine a higher-education culture that increases class sizes and gives its own graduate students fewer opportunities to apprentice themselves in the classroom as teaching assistants, relying instead on free essay-scoring software.
Imagine a software company that dismisses sound educational practice as expensive and inconvenient.
Imagine a reporter for a top-quality news organization trivializing a complex issue with a lead that could have come straight from that software company’s PR team.
Imagine taking a college exam, and, instead of handing in a blue book and getting a grade from a professor a few weeks later, clicking the “send” button when you are done and receiving a grade back instantly, your essay scored by a software program. —NYTimes.com.
I can imagine something like this being a moderately useful self-check tool, but only if students get to determine how the computer weighs each component (so they can see just how artificial the rubric is), and only for certain assignments. For example, I can imagine a software program scanning to see whether a student has submitted a news article that uses dialogue labels such as “claimed” or “explained” instead of the more neutral “said.” I can imagine a journalism tool that flags news articles with no direct quotations, or too-long paragraphs, or a high percentage of opinion words outside quotation marks.
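To make the artificiality concrete, here is a minimal sketch, in Python, of the kind of keyword check I have in mind. Everything in it is hypothetical: the tag list, the paragraph-length threshold, and the function name are illustrative assumptions, not any real product’s behavior.

    import re

    # Hypothetical loaded dialogue tags; a neutral news style prefers "said".
    LOADED_TAGS = {"claimed", "explained", "admitted", "insisted"}

    def check_article(text, max_paragraph_words=100):
        """Flag loaded dialogue tags, missing quotations, and long paragraphs."""
        flags = []
        # Surface check 1: loaded dialogue tags.
        for tag in sorted(LOADED_TAGS):
            if re.search(rf"\b{tag}\b", text, re.IGNORECASE):
                flags.append(f'loaded dialogue tag "{tag}" (consider "said")')
        # Surface check 2: a news article with no direct quotations is suspect.
        if '"' not in text:
            flags.append("no direct quotations found")
        # Surface check 3: overlong paragraphs, split on blank lines.
        for i, para in enumerate(text.split("\n\n"), start=1):
            if len(para.split()) > max_paragraph_words:
                flags.append(f"paragraph {i} exceeds {max_paragraph_words} words")
        return flags

    print(check_article("The mayor claimed the budget was balanced."))
    # ['loaded dialogue tag "claimed" (consider "said")', 'no direct quotations found']

A checker like this can count and flag, but it cannot tell whether a particular “claimed” is loaded or exactly right in context. That judgment still requires a human reader, which is rather the point.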
But as an English teacher, I read student essays for reasons other than checking keyword clusters.
One-on-one oral exams are a lot more comprehensive and less prone to cheating than essays or multiple-choice tests, but they are time-consuming. I do require face-to-face consultations in various classes, but they are part of the writing process, and thus part of the instructional process, not part of the testing culture.
If I asked students to speak into a microphone, but used a computer to analyze their vocal patterns to detect levels of confidence and key term clusters, I could not in any honest sense represent my instructional method as involving oral presentations.
An instructor who designs an assessment that includes a writing prompt, but who plans to assess the student’s work by computer, with no human reader, is probably better off using a series of fill-in-the-blank and multiple-choice questions.
To me, this is SO sad. On the horizon with #PARCC. New Test for Computers – Grading Essays at College Level http://t.co/PU4VD2zqXQ via @zite
At my school, yes, one third of the English department has stooped this low. It’s horrible. A co-worker described my efforts to get our kids prepared for college as “the trials of Hercules,” because other members of the English department are doing all writing assessment via computer programs. No formative assessment whatsoever. No true, honest feedback given.
By comparison, I refuse to use a program like Edmodo, even if it does have some value, because it has become an embarrassing misuse of technology. This is even more troubling because the biggest user of said programs is the English I/English II teacher. Students are walking into English III/IV with zero writing skills. Of course, that leads to other problems, but that’s a whole different discussion.
I make different rubrics for every writing assignment I give. Every single one is different. Grammar, mechanics, usage, style, etc., are all areas to be assessed, but those don’t tell the whole story. Many of my students make cogent, salient points, but the grammar is spotty. Fundamentally, I will not punish a student (at least not overly harshly) for a previous teacher’s failure to make corrections and constructively guide the student toward improvement. Ergo, I regularly have students coming to me in English IV (senior level English), at honors level, who cannot write a sentence.
There’s a reason I’m actually happy to live alone and have nothing going for me other than my career: I am in a position to dedicate insane amounts of time to getting our students college-ready.
Yeah, that’s completely ridiculous to me, as I was just an English student (…oh, and I’m a teacher), but it’s entirely possible it’s symptomatic of a larger problem. If students’ levels of achievement have dropped so significantly that stringing together grammatically correct sentences is enough for an “A,” then fine, give the task of grading to a sophisticated version of MS Office. Maybe Clippy can post colorful, encouraging comments like “Good job! You used the correct ‘there’ (smiley face).”
While it might make grading easier, I sincerely hope we haven’t dropped to that level yet. I can’t imagine a computer program trying to comprehend my clever analogies or razor-sharp wit.
There is a possible compromise, though. Perhaps such software could be used to weed out the really bad papers. If grammar and punctuation errors alone make it a D paper, should the instructor really care about reading it? Clearly the student didn’t care about writing it.
Appreciate it, sir.
Kevin, feel free to share.
But I think this opens a larger discussion. “Grading” comes in a variety of forms. Probably most of those are not directed toward learning.
Agreed. Years ago at a “serious games” conference, I noticed that educators and software creators at one session were butting heads because the techies were using the concepts “testing” and “assessing” and “teaching” almost interchangeably, but the educators seemed to think the techies were just being snarky or evasive. I finally said “testing is a subset of assessment,” which led to a brief discussion of formative vs summative feedback.
You get to the nub of this issue–this is a new tool that can help students learn. It isn’t really grading.
Do you mind if I share this with some colleagues? This is fantastic and puts into words what I have not been able to say or quantify at my school. The rise of, and growing dependency on, programs like Edmodo has deeply impacted instruction, especially writing.