Pre/Views - On Computer Grading of Student Writing: What Can Be Counted, and What Actually Counts
By Vicki Tolar Burton, Director, Writing Intensive Curriculum
The OSU Faculty Senate recently engaged in a surprisingly lively discussion of student writing in the Baccalaureate Core. At the November meeting of the Faculty Senate, former Baccalaureate Core co-chairs Kerry Kincanon and Marion Rossi presented recommendations from the Bacc Core Committee (BCC) regarding writing in the Baccalaureate Core. These recommendations were drawn from the 2012 report Review of Writing in the Baccalaureate Core.
An engaged discussion of writing followed Kincanon and Rossi’s presentation. One senator suggested that OSU move toward computerized grading of student writing. The presiding co-chairs asked me to respond, and I was glad to do so: computer grading of student writing is a hot-button topic nationally, and the senators deserved a clear account of what grading software can and cannot do. While computer programs are very efficient for grading Scantron examinations, they are limited in their ability to evaluate prose, especially anything much longer than a paragraph.
Here are some things a computer program can tell you about a student’s paper:
- The length/number of words
- The percentage of words over three syllables
- The average number of words per sentence
- Whether there is variety in sentence length
- Whether it has a title
As we know from MS Word, a computer program can also scan for grammar and punctuation errors.
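To make concrete just how shallow these countable features are, here is a rough Python sketch of such a metric-counter. Everything in it is my own illustration, not any vendor’s actual method; in particular, the syllable counter is a crude vowel-group heuristic.

```python
import re

def writing_metrics(text, title=None):
    """Surface-level metrics of the kind grading software can count.

    Illustrative sketch only: the syllable count is a rough heuristic
    (runs of vowels), not a real phonetic analysis.
    """
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]

    def syllables(word):
        # Heuristic: each run of vowels counts as one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    long_words = [w for w in words if syllables(w) > 3]
    sent_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    mean = sum(sent_lengths) / len(sent_lengths) if sent_lengths else 0
    # "Variety in sentence length" reduced to a variance, for illustration.
    variance = (sum((n - mean) ** 2 for n in sent_lengths) / len(sent_lengths)
                if sent_lengths else 0)
    return {
        "word_count": len(words),
        "pct_long_words": 100 * len(long_words) / len(words) if words else 0,
        "avg_words_per_sentence": mean,
        "sentence_length_variance": variance,
        "has_title": bool(title and title.strip()),
    }
```

Note what is missing from the returned dictionary: nothing in it knows what the paper is about, whether its claims are true, or who it is written for.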
Here are a few things a computer program cannot assess:
- Whether the paper addresses the assignment
- Whether ideas are fully developed
- Whether the paper has accurate content
- Whether the purpose of the writing is clear
- Whether the paper addresses the appropriate audience
As it turns out, these are precisely the qualities OSU faculty valued most highly in an exercise conducted on November 12 during the fall WIC faculty seminar.
The assessment of these highly-valued qualities requires a human reader with knowledge of the content area, the assignment, and the rhetorical situation for the piece of writing. It also requires familiarity with the discourse of the field or discipline.
This human reader might be a teacher, and for the final evaluation of student writing in a WIC course, it must be the teacher. But feedback on the crucial elements can also come from peers. Most of our students, especially those raised in Oregon, have been giving each other feedback on their writing since elementary school. As these experienced responders move into advanced university courses in their major and grow in content knowledge, they can also grow as reviewers, learning to raise the intellectual level and precision of their feedback.
So my suggestion to faculty senators considering machine grading of university-level writing based on metrics like long words and sentence variety is that we instead improve student writing by turning to our best resource: students. Let’s teach them to give quality feedback on writing in their discipline, and by doing so help improve the quality of writing in our majors. If this project interests your unit, WIC can help. Email us here.