Evaluating Online Discussions

Discussions in class and online are not the same. When a comment is typed rather than spoken, the writer has more time to decide what to say. Online comments also have more permanence: they can be read more than once and responded to more specifically. And because online commentary isn’t delivered orally, it evokes fewer of the fears associated with speaking in public. These features begin the list of what makes online discussions different, and the differences have implications for how online exchanges are assessed. What evaluation criteria are appropriate?

Two researchers offer data helpful in answering the assessment question. They examined a collection of rubrics being used to assess online discussions: 50 rubrics located online using various search engines and keywords. All the rubrics in the sample were developed to assess online discussions in higher education, and together they contained 153 different performance criteria. Based on a keyword analysis, the researchers grouped the criteria into four major categories. Each is briefly discussed here.

Cognitive criteria—Forty-four percent of the criteria were assigned to this category, which loosely represented the caliber of the intellectual thinking displayed by the student in the online exchange. Many of the criteria emphasized critical thinking, problem solving and argumentation, knowledge construction, creative thinking, and course content and readings. Many also attempted to assess the extent to which the thinking was deep and not superficial. Others looked at the student’s ability “to apply, explain and interpret information; to use inferences; provide conclusions; and suggest solutions” (p. 812).

Mechanical criteria—Almost 20 percent of the criteria were assigned to this category. These criteria essentially assessed the student’s writing ability, including use of language, grammatical and spelling correctness, organization, writing style, and the use of references and citations. “Ratings that stress clarity … benefit other learners by allowing them to concentrate on the message rather than spend their time trying to decipher unclear messages” (p. 813). However, the authors worry that the emphasis on the mechanical aspects of language may detract from the student’s ability to contribute in-depth analysis and reflection. They note the need for more research about the impact of this group of assessment criteria.

Procedural/managerial criteria—The criteria in this group focused on the students’ contributions and conduct in the online exchange environment. Almost 19 percent of the criteria belonged to this category. More specifically, these criteria dealt with the frequency and timeliness of the postings. Others assessed the degree of respect shown and the extent to which students adhered to specified rules of conduct.

Interactive criteria—About 18 percent of the criteria were placed in this category, and they assessed the degree to which students reacted to and interacted with each other. Were students responding to what others said, answering the questions of others, and asking others questions? Were they providing feedback? Were they using the contributions of others in their comments?

This work is not prescriptive. It does not propose which criteria are right or best. However, it does give teachers a good sense of the aspects of online interaction that are most regularly assessed, which can be helpful in creating or revising a set of assessment criteria. Beyond what others are using, a teacher’s decisions should be guided by the goals and objectives of the online discussion activity. What should students know and be able to do as a result of interacting with others in an online exchange?

Reference:

Penny, L., & Murphy, E. (2009). Rubrics for designing and evaluating online asynchronous discussions. British Journal of Educational Technology, 40(5), 804–820.
