Discovering Metrics for Evaluating an Academic Weblog Community [First Draft of CCCC 2005 Proposal]

Speaker #n will present a statistical analysis of the activity on a student weblog community in order to identify possible correlations which may advance our understanding of the pedagogical value of weblogs. The community is a group of personal blogs, all hosted on the same server and sponsored by the university. Activity on the site will be monitored during the summer break, during which all students will have the ability to continue posting to their blogs.

Most members of this particular community are undergraduates who are required to blog for course credit, but the server does not, at the moment, host any “class blogs”. Those students who blog for credit do so on their own personal blogs, where they are given free rein to blog on whatever they wish, in addition to their academic blogging. A small number of faculty and students who are unconnected to the classes where blogging is required nevertheless keep blogs on a voluntary basis. About 5% of the bloggers in the group are responsible for about 50% of the activity on the site, and the voluntary bloggers are well represented in this list of active users. Preliminary analysis of the ratio between posts (top-level entries created by registered bloggers) and comments (brief responses, which any web visitor, including random web surfers, can add to the main entry) reveals several interesting details: male bloggers wrote less frequently than female bloggers, but typically attracted more comments per post.

Other areas to examine include the relationship between the blogroll (a sidebar containing a list of a blogger’s favorite weblogs) and the classroom seating chart, and the usual computer-assisted textual analysis subjects such as word count, word frequency, and average word length. Before this information can be presented, an analysis of the peculiar ethics of this particular research situation may prove illuminating. All students who blog for class are informed of the inevitably public nature of their work, which makes the invention of pseudonyms almost pointless (since Google would easily help the curious audience member identify the “real” author of any quoted passage). Information such as average number of posts per month, or average number of comments attracted by each post, is already public (even though only the weblog administrator has push-button access to an up-to-the-minute master list). Other factors which may be examined for possible associations include the degree to which the student personalizes the blog template (leaving it “plain vanilla,” modifying it in simple or complex ways), the average number of links per post, and the average number of inbound, on-site, and off-site links per post.
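Ratios like these are straightforward to compute once the weblog data is exported. Here is a minimal sketch of the kind of number crunching involved; the record structure and the field names (`author`, `gender`, `comment_count`) are hypothetical stand-ins, not drawn from any actual export.

```python
from collections import defaultdict

# Hypothetical export: one record per top-level post.
posts = [
    {"author": "blogger_a", "gender": "F", "comment_count": 2},
    {"author": "blogger_a", "gender": "F", "comment_count": 1},
    {"author": "blogger_b", "gender": "M", "comment_count": 5},
    {"author": "blogger_c", "gender": "F", "comment_count": 0},
]

def comments_per_post_by_gender(posts):
    """Average comments attracted per post, grouped by blogger gender."""
    totals = defaultdict(lambda: [0, 0])  # gender -> [total comments, total posts]
    for p in posts:
        totals[p["gender"]][0] += p["comment_count"]
        totals[p["gender"]][1] += 1
    return {g: c / n for g, (c, n) in totals.items()}

def activity_share(posts, top_fraction=0.05):
    """Fraction of all posts written by the most active slice of bloggers
    (e.g. the 5% of bloggers responsible for 50% of the activity)."""
    counts = defaultdict(int)
    for p in posts:
        counts[p["author"]] += 1
    ranked = sorted(counts.values(), reverse=True)
    k = max(1, round(len(ranked) * top_fraction))
    return sum(ranked[:k]) / sum(ranked)
```

With real data, the same two functions would answer the two observations above: how comment traffic breaks down by gender, and how concentrated posting activity is among the most prolific bloggers.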
(Jerz’s Literacy Weblog)

First draft of my component of a panel proposal for next year’s CCCC (I am “speaker #n”.)

I was resisting putting in buzzwords such as “emergence” and “network,” since I think of this as a practical exploration of just what it is possible to learn once we learn to read all the data that’s being encoded in the networks the students form when they link to each other and post comments on each other’s blogs.

I can’t really come up with a cuter humanities-style title for the paper, not until I’ve actually got some results to work with.

I was thinking of “Mene, mene, tekel, upharsin,” if only to remind me to look for signs that weblogs aren’t the heaven-sent answer to every single thing that might possibly be less than perfect in academia.

4 thoughts on “Discovering Metrics for Evaluating an Academic Weblog Community [First Draft of CCCC 2005 Proposal]”

  1. Chris, thanks for the feedback… I’m at a very small school, and since I’m not teaching two sections of the same course, I can’t, for example, ask one class to blog and another class to write the traditional way, so that cuts down on the amount of one-to-one comparison I can do. Any thoughts in that area?

  2. I was thinking of the number crunching aspect, and wondered whether T-units (minimum terminable units: the shortest meaningful, complete units of an “utterance”) would be at all useful or meaningful to look at, especially diachronically. It’s something that has long been done in the ESL writing end of things, though I have my doubts about what, precisely, it signals. Even if it’s forced blogging, what might longer linguistic constructions suggest about, if nothing else, students’ comfort with the interface and its impact on their writing?
    It’d be interesting (though well outside the scope of what you’re looking at) to look at a comparison between average T-unit length in digital and analog writing spheres. Good luck with this, Dennis. Sounds fascinating.

  3. Thanks for your feedback, Mike. I don’t believe that our sample is large enough or typical enough to represent trends in academic blogging across the board, but we’ve got to start somewhere. I’m painfully aware that, as the teacher, my preferences can directly affect the data. For instance, if I think students should put more links in their blogs, I just create an assignment that involves linking from their blogs, and presto, the number of links from student blogs rises. I have never indicated that I would grade student blogs according to the number of comments attracted, but almost immediately students started evaluating the “success” of their individual blog entries by counting the number of comments each attracted.
     Students are only required to make four or six comments on peer blogs (each time they turn in their blog portfolios, I simply say, “Include two comments you made on a peer blog”). The vast majority of students make far more comments than that; a comment from a peer is still a “gift” in the economy of the SHU blogosphere, even though the entry to which the comment was made may have been “forced blogging”. As for terms such as “frequent” and other measurements such as length of posts, I’ll simply have to crunch the numbers and see what the mean and median are (which means I’ll need a statistics refresher…).
     You’re right, Mike, this is real-time anthropology; to do my teaching well I have to be part of the culture, but those bloggers who are really taking to blogging are finding value in it that I didn’t expect them to find. I plan on introducing blogs to my freshman comp class, where conventional wisdom suggests that students might be less enamored of writing than the average English major.
     I’m glad you found the proposal interesting… thanks for the encouragement. Deadline’s a week from Monday, so I have a bit of time to tweak it.

  4. Fascinating stuff. Compelling and really interesting (and yet also problematic: you’re doing what every paranoid internet user fears, observing and tracking their web habits!). I understand of course that you’ll be sensitive to privacy, and that all this is no different than what a webmaster might do for a corporate website… but still, I’d be running some ideas past the ‘human research’ committee first, just to be safe. Regardless, this could be very, VERY, important work! I especially liked the connection you drew between blog commenting and the seating chart. You’re becoming a sort of cyber-anthropologist in all this work, as much as a scholar of technology in the humanities.
    Questions that popped into my head as I read your post: Does it make a difference that this is just the first year of blogging on campus? Do we have “enough” bloggers to justify any claims about trends? Since you’re the teacher of all these classes that use blogging, how might that skew the empirical statistics at work here? How much of the blogging is assigned and how much isn’t (can you separate that when looking at numbers)? How might the students themselves skew the stats, now that you’ve publicly announced that you’re tracking trends over the summer? You write “Male bloggers wrote less frequently than the female bloggers, but typically attracted more comments per post” — I immediately wondered what gender population constituted the commenting. How frequent is “frequent”? How much of this is determined by the number of women in the major vs. other majors? How much of this is determined by the still somewhat skewed population of women over men on our campus, compared to national averages?
    Great food for thought. I predict good things with this paper.
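The mean-and-median question raised in comment 3 (and the per-post measurements comment 2 proposes, such as T-unit length) reduce to simple descriptive statistics over a list of numbers. A minimal sketch using Python’s standard library, with made-up comment counts standing in for real data:

```python
import statistics

# Hypothetical comment counts for a semester's worth of posts; a few
# heavily commented entries pull the mean well above the median.
comments_per_post = [0, 0, 1, 1, 2, 2, 3, 14]

mean = statistics.mean(comments_per_post)      # sensitive to outliers
median = statistics.median(comments_per_post)  # a more robust "typical" post
```

The gap between the two values is itself informative here: a mean well above the median suggests a small number of entries attract a disproportionate share of the comments, which matches the 5%-of-bloggers/50%-of-activity pattern described in the proposal.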
