Over the Christmas break, I’ve been churning through Khan Academy math drills, so that I can be a more effective homeschool parent.
It’s actually kind of fun watching my score go up and earning badges, in the way that birds trained to peck buttons for food think it’s fun to peck their little beaks bloody.
As a grad student ploughing through Kant and Derrida, I found it an intellectual relief to run a computer program through a compiler and see a list of exactly what character on what line of my program triggered an error (divide by zero? missing semicolon? stack overflow? undeclared variable? illegal declaration?).
While computers are getting better at recognizing grammar errors, there is no software that can provide appropriate feedback to the student who, after weeks of submitting grammatically correct filler, suddenly delivers a ragged paragraph that roughs out a brilliant personal intellectual breakthrough.
I’m conscious that I cannot award “energy points” to reward the accumulation of raw knowledge that leads to expertise in writing or literature classes, yet I regularly teach science and business majors who expect immediate, quantifiable feedback at every stage of their drafts. I do provide a proxy of sorts — asking students to do things like “Demonstrate that you can supply a quotation from a source to defend an argument that you wish to refute,” or “Propose a title that not only identifies your topic, but also states the position you intend to defend.”
Of course, a computer can’t grade those assignments automatically. Instead, my students have to wait for me to read their submissions and assign a grade. If all I am doing is assigning a number grade, I can breeze through a set of exercises in 20 minutes, awarding points based on whether the student has or has not met the specific criteria. But I am not awake at my computer at 2 in the morning when the student submits the paragraph, so my students cannot spend several hours churning out 12 different paragraphs, making incremental changes, resubmitting for instant, no-penalty feedback, and running with whatever iteration yields the best short-term progress — although I confess I have solved many a technical problem (in Blender 3D, Inform 7, Scratch, Twine, InDesign, iMovie, or Khan Academy) by doing just that.
Offering instant feedback that quantifies the consequences of routine and trivial choices is a great way to help students understand cause and effect. For example, Khan Academy features some really great interactive tools that help students visualize the difference between the median and the mean, or why multiplying by a fraction is the same as dividing by the inverse of that fraction.
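To spell out that last example: multiplying 6 by 1/2 gives 3, which is exactly what you get by dividing 6 by 2 (the inverse of 1/2), and an interactive tool lets a student verify that equivalence with any numbers they choose, instantly.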
It is already common to take these features — the immediate quantification of routine and trivial actions — and apply them to human behavior in different contexts. For example, there’s already an industry that helps people translate their daily activity — number of steps taken, number of miles run — into data that can be presented, shared, and compared.
A few days ago, I came across a video on the YouTube channel Extra Credits — a critique of Sesame Credit, the Chinese government’s new social media tool.
If you post pictures of Tiananmen Square or share a link about the recent stock market collapse, your Sesame Credit goes down… Share a link from the state-sponsored news agency about how good the economy is doing and your score goes up… If you’re making purchases the state deems valuable, like work shoes or local agricultural products, your score goes up… If you import anime from Japan though, down the score goes.
https://www.youtube.com/watch?v=lHcTKWiZ8sI
Back in 2011, Ian Bogost said it pretty well: “Gamification is Bullshit,” and suggested we describe this kind of software as “exploitationware.” His 2010 game “Cow Clicker” demonstrated how easy it was for the right combination of stimuli to get thousands of human beings to click a picture of a cow.
Last semester, I had a few students struggle more than usual with paper topics. One student kept changing topics whenever she couldn’t find scholarship that supported the exact argument she wanted to make; I had to tell her that if someone had already written a whole article advancing the exact argument she wanted to advance, then there would be no point in writing the paper (unless, of course, she had something further to contribute — but how would she know, unless she read up on the current debates in the subfield she had chosen to explore?).
When a student in an upper-level literature class submitted an iffy paper proposal, my feedback was something like, “What scholarly evidence can you find to support this claim?” I was surprised, a few weeks later, to learn that, instead of trying to find such sources, or asking me for help finding them, the student had interpreted my comment as “shooting down” the topic and picked a different one — but once again made the proposal without demonstrating that it was possible to find credible sources that had explored it. The truth is that finding credible sources to respond to was part of the intellectual challenge of the writing task; even if I looked up the student’s proposed topic and found sources that I thought would be useful if I were writing on that topic, that doesn’t mean the sources I found on my own would help the student develop whatever argument he or she found attractive.
A system that awards points for supplying the “correct” answer seems innocuous and wonderful when it’s deployed in mathematics. My kids use apps that help them memorize spelling, geography, and anatomy, but only a subset of human knowledge and behavior fits into a feedback system driven by multiple-choice and fill-in-the-blank answers.