Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., Lei, K., &
Mong, C. (2007). Using peer feedback to enhance the quality of student online postings:
An exploratory study. Journal of Computer-Mediated Communication, 12, 412-433.
In this case study, Ertmer et al. (2007) examined the effect of peer feedback on the quality of students' online postings. The hypothesis was that peer feedback would positively affect the depth of responses and discussion replies to posted questions for the 15 graduate students in an online technology integration course. Using Bloom's taxonomy as a guide for evaluating the responses, peers gave scores to one another, and those scores counted toward each student's grade. The anticipated effect was threefold: faster response time, greater appreciation for peer feedback, and higher-order cognitive involvement as students responded to peers. Data were collected through various surveys and interviews and evaluated for trends. Unfortunately, the results were not significant: the quality of students' responses did not increase, though it did not decrease either. The open-ended survey questions made clear that students did value what their peers said, though they consistently valued the professor's responses more. Students disliked having to assign a point value to their peers, since grades depended on these evaluations. Overall, students felt that the process of evaluating their peers challenged them cognitively and benefited their own analytical skills.
This thorough case study began with a valuable and clearly written literature review establishing that student discussion is extremely valuable in the learning process; peer feedback supports that goal but had not been studied sufficiently in the online setting. The study included many protocols to ensure the quality of the research, such as a variety of surveys and interviews, instructor review of peer evaluations, training on the scoring rubric, instruction in Bloom's taxonomy, modeling of quality responses, anonymity of peer responses, and precautions to ensure that scores were not influenced by the timing of the posts. As mentioned, though, the results were not statistically significant enough to advance the research questions, although the interview responses did support the value of peer feedback overall.
While the cited references establish that feedback is valuable to students, they do not specify that peer feedback is included in that cycle. No studies were cited that offer statistical support for the use of peer feedback in any setting, nor were any cited showing positive use of peer feedback in online courses. As a K-12 Language Arts teacher, I believe I understand how that can be. Peer feedback in a high school writing course, for example, is not equivalent to teacher feedback at that level. While some young writers are more skilled and can offer quality feedback, they cannot offer consistent, quality feedback to all the students in their classes; that is the job of the teacher. Students distrust peer feedback for good reason: the logic and thinking skills of their peers may or may not be sufficient to offer quality feedback, and the editing preferences or understanding of one student may be sketchy at best, so how can he or she correct a peer? That said, I think there would be a way to study the effect of peer feedback in higher education courses, but I would let peer feedback be feedback and not grading!