How students discuss course content continues to be a concern. Whether the exchange occurs in the classroom, in a group, or online, most of us have heard students making assertions without ever mentioning evidence, feeling free to comment when they are unprepared, and mostly agreeing with whatever everyone else says. True, few students are familiar with academic discourse, but most courses make clear that students don’t learn to discuss by osmosis, by dealing with rigorous content, or by hearing how faculty handle intellectual dialogue. So, how do we ratchet up the caliber of student discussions?
A recent empirical analysis of interactions in 30 lab groups plus data collected on 91 individuals in those groups offers some hints (Paine & Knight, 2020). It also confirms discussion quality concerns. The students worked together to complete four assignments that involved experiments and data from their undergraduate biology course. Researchers used a previously developed Exchange of Quality Reasoning scale to determine the amount of reasoning in the groups’ recorded discussions of each assignment. Researchers heard no evidence of reasoning in 72 percent of the discussions, while only 5 percent demonstrated top-level reasoning. And this is just one of several disturbing findings in this elaborately designed study.
Of value to a discussion improvement agenda are several measures these researchers developed to analyze the discussions. One of these identifies 15 different kinds of comments made in the discussions. Most of the comments contributed to the discussion, some more than others, and a few had negative effects. The most common comments these students made either analyzed content or were off topic. Least common were comments that disagreed and comments that drove the discussion, both considered essential in good discussions.
The complete list, which includes definitions and examples (see Table 6 in the article), lays out the nature of comments made during discussions. Reading it, I wondered: If you asked students to name the kinds of comments they’d heard, how many would they be able to list? If you gave them a set of actual comments, could they identify their defining features? How many of our students have actually been challenged to observe the nature of discourse in a discussion?
Another research question explored whether these behaviors coalesced into roles, with a student consistently performing one behavior or some combination of them. The researchers used individual data to identify ten possible roles (named and defined in Table 9). The four most common roles were analyst, reasoner, generalist, and minimalist. Generalists engaged in multiple behaviors, and minimalists spoke only once or twice during a discussion. Least common were the knowledge facilitators, who tried to teach or explain content to others, and the drivers, who provided leadership by asking questions and keeping the group on track. Although students showed some preference for a particular role, taking the same one in 50 percent of the discussions, there was no evidence that role choices resulted from conscious decision-making. Students assumed the role of analyst far more often than that of reasoner.
Here’s another consideration for improving discussions: Do students know that people take roles in discussions and that they usually opt for them without conscious awareness? Taking on roles not being filled in a discussion can help to focus and enrich the exchange, and the roles identified in this research describe behaviors well within the reach of most students. They can ask questions in groups. They can try to clarify confusing points or attempt to explain something to someone who doesn’t understand. But none of the roles can be purposely filled without an awareness of them.
This is a particularly impressive piece of research—well worth taking a look at (and it’s open access). It offers a detailed picture of what happened in a collection of discussions that should have been characterized by reasoning and problem-solving. What didn’t happen in those interactions offers another clear picture—this one showing where our instruction needs to focus.
Paine, A. R., & Knight, J. K. (2020). Student behaviors and interaction influence group discussion in an introductory biology lab setting. CBE—Life Sciences Education, 19(4). https://doi.org/10.1187/cbe.20-03-0054