Clicker Questions: Does It Matter What Kind?

Teaching with Technology

The use of clickers, especially in large classes, has made participation a reality for a lot more students. It’s a safe way to offer an answer and an equally constructive way to find out whether yours is the correct answer. Research on clickers and learning regularly documents their positive effects on exam scores. Now the research enterprise is moving to explore more specific questions, such as whether the clicker influence is more significant for some questions than for others. For example, here’s a study that looked at the effects of factual and conceptual clicker questions on exam performance.

The research team had three hypotheses: (1) they expected to replicate previous findings that factual and conceptual clicker questions improve exam performance, especially the factual questions; (2) they anticipated that conceptual questions would improve performance on conceptual exam questions; and (3) they predicted that students' prior knowledge and their approach to learning would mediate the effects of clicker questions on learning.

They chose to study the effects of clicker questions in an actual classroom. In this case it was a biology course, taught didactically: the instructor lectured and used PowerPoint slides. The research team used four different conditions to test their hypotheses. They had students answer factual clicker questions; they had students answer conceptual clicker questions; they used an enhanced control condition, in which the instructor asked no clicker question but verbally identified the content as important to know for the exams; and they used a simple control condition with no clicker question and no designation of the content as important.

The experiment ran across four semesters, which allowed researchers to assign each exam question to a different experimental condition. This permitted them to look at the “effects of the four conditions without the confounding variable of item differences affecting the results” (p. 48).

And what did the results show? They confirmed the first hypothesis. The factual clicker questions that students answered in class, for which they saw how everyone else answered, were given the correct answer, and discussed the question if fewer than 90 percent of the group got it correct, resulted in a statistically significant increase in correct factual question answers on the exams. Interestingly, and a bit surprisingly, the second hypothesis was not confirmed. Conceptual clicker questions used in the same way as the factual questions did not improve performance over the simple control condition in which the instructor did nothing.

The third hypothesis was partially supported: “We found that clicker questions brought the overall exam performance of students who did not employ deep learning strategies to the level of their deep strategy-using peers” (p. 54). The use of clicker questions did not affect the other student variables studied: students’ metacognitive self-regulation, active learning, shallow learning strategies and motivation, GPA, or prior knowledge.

A second study further explored the effects of clicker questions—this time in a physics course with an instructor who used a problem-oriented pedagogy. The researchers predicted the same positive effect on factual questions and that “stronger, more knowledgeable students would score differently from their less well prepared counterparts, in response to the clicker intervention” (p. 54). The basic design of the study remained the same. However, in this case, the factual clicker question benefit on factual exam questions was not realized, nor was there any benefit on the conceptual questions. The researchers concluded, “The present study replicates many prior reports of clicker use, which demonstrated that the technology is effective for supporting factual knowledge retention in lecture-based classrooms, but also demonstrated that the effect does not always generalize to courses employing active learning strategies” (p. 56).

There was another finding of note in the second study: “Students in the problem-oriented course with little or no prior knowledge of the material suffered more from the negative effects of the factual clicker questions and enhanced control condition on the conceptual exam questions” (p. 56). The researchers wondered whether calling attention to the importance of content—with either a clicker question or the instructor's identification of it as important—caused these students to focus on the content but only in superficial ways. They were motivated to memorize it because it was important to do so, but they memorized without understanding it. For all students in the study, the researchers think their testing of an enhanced control condition, in which the instructor called attention to the importance of the content, “suggests that the reason behind clicker effects, or at least a part of the reason, is that they may alert students to important information and thus lead students to focus more on that information, either in class or during study” (p. 56).

This is good work. It moves our understanding of clicker effects forward and shows that they don’t just work, but work differentially depending on the type of clicker question, the instructional approach, and a collection of variables related to students.

Reference:

Shapiro, A. M., Sims-Knight, J., O’Rielly, G. V., Capaldo, P., Pedlow, T., Gordon, L., & Monteiro, K. (2017). Clickers can promote fact retention but impede conceptual understanding: The effect of the interaction between clicker use and pedagogy on learning. Computers & Education, 111, 44–59.
