How To Crowdsource Grading

Top-down grading by the prof turns learning (which should be a deep
pleasure, setting up for a lifetime of curiosity) into a crass
competition:  how do I snag the highest grade for the least amount of
work? how do I give the prof what she wants so I can get the A that I
need for med school?  That’s the opposite of learning and curiosity,
the opposite of everything I believe as a teacher, and is, quite
frankly, a waste of my time and the students’ time. There has to be a
better way . . .

So, this year, when I teach “This Is Your Brain on the Internet,”
I’m trying out a new point system supplemented by peer review
and by my own constant commentary (written and oral) on student
progress, goals, ambitions, and contributions. Grading itself will be
by contract: do all the work (and there is a lot of work), and you
get an A. Don’t need an A? Don’t have time to do all the work? No
problem. You can aim for and earn a B. There will be a chart. You do
the assignment satisfactorily, you get the points. Add up the points,
there’s your grade. Clear-cut. No guesswork. No second-guessing
‘what the prof wants.’ No gaming the system. The student is
responsible.
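The mechanics of that chart can be sketched in a few lines of code. Everything here is a hypothetical illustration, not Davidson’s actual chart: the point values and the percentage cutoffs are invented for the example, and the function name `contract_grade` is my own.

```python
# A minimal sketch of contract grading: each completed assignment earns
# its listed points, and the grade follows mechanically from the total.
# Cutoffs and point values below are hypothetical, not Davidson's.

GRADE_CUTOFFS = [(90, "A"), (80, "B"), (70, "C")]  # percent -> grade


def contract_grade(points_earned, total_points, cutoffs=GRADE_CUTOFFS):
    """Add up the points; the chart, not the prof, determines the grade."""
    pct = 100 * points_earned / total_points
    for cutoff, grade in cutoffs:
        if pct >= cutoff:
            return grade
    return "F"
```

The design point is that the function takes no judgment calls as input: once an assignment is marked “satisfactory,” the grade is pure arithmetic, which is exactly the “no second-guessing” property the contract promises.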

But what determines whether work meets the standard required by this
point system? What does it mean to do work “satisfactorily”? And how
to judge quality, you ask? Crowdsourcing. —Cathy Davidson, HASTAC