Don’t Assume Difficult Questions Automatically Lead to Higher-Order Thinking

Higher-order questions promote thinking and result in sophisticated intellectual development. They’re the kind of questions teachers aspire to ask students, but, according to research, they aren’t the typical ones found on most course exams. Part of the disconnect between aspiration and reality stems from the difficulty of writing questions that test higher-order thinking skills.

They are hard to write because teachers aren’t always clear about what makes a question higher order. Bloom’s venerable taxonomy is where most thinking about question types begins and ends. At the bottom of Bloom’s taxonomy sit the lower-order questions—the ones that involve a fact or detail and the ones faculty understand best. Lower-order questions ask for definitions, usually a regurgitation of those given by the teacher or those that appear in the text. They ask students to show that they understand the material—that they can illustrate it. In other words, they test knowledge and comprehension, not necessarily thinking.

Higher-order questions ask students to use information, methods, concepts, or theories in new situations. They may ask students to predict consequences, to break problems into parts, to identify critical components of a new problem, to recognize patterns and organization within a collection of parts, to discriminate among ideas, and to make choices based on reasoned arguments. These and other extrapolations of Bloom’s higher categories of application, analysis, and evaluation were used in the study referenced below.

The authors of the study are writing about biologists and higher-order thinking questions in that discipline, so while their examples are specific to that field, their study of higher-order questions and the issues raised by their findings are relevant to any faculty member who wants students to leave courses doing more higher-order thinking. What motivated this study is something relevant to many faculty members: “… biologists still struggle to create good HOCS [higher-order cognitive skills] questions.” (p. 48)

Their study design was unusual. It analyzed (via a variety of qualitative methods described in the article) the conversations of two groups of biologists. One group was writing 40 higher-order questions that would be used in a study of the impact of higher-order clicker case studies compared with lower-order clicker case studies in introductory biology courses. These were clicker case studies that combined an engaging story and scientific content with multiple-choice clicker questions. The second group of biologists rated the clicker questions written by the first group with the goal of determining which were higher-order questions.

Analysis of the conversations revealed that both groups, those writing the questions and those rating them, regularly referred to Bloom’s taxonomy using the extrapolations illustrated above. They would compare what a question was asking students to do with the collection of higher-order extrapolations. But other criteria emerged that these groups were using to determine whether a question actually tested higher-order thinking, and this is where the results get interesting and perplexing. For example, the words “difficult,” “challenging,” or “easy” were regularly used to explain why a particular question was or wasn’t higher order.

Even though those writing the questions recognized that not all difficult questions were higher-order questions, they worried that when their questions were tested on students and too many got the question right, it was an “easy” question and therefore might not be higher order. The researchers write, “Even though the Instructor Team [those writing the questions] was regularly stating that cognitive level and difficulty may be two different things, they made a practice of judging the cognitive level of a question based on difficulty.” (p. 52) It’s an assumption that probably colors most faculty thinking: hard questions are higher order and easy ones are not.

Beyond the difficulty of the question, analysis of these conversations revealed that faculty were also concerned with the time required to answer a question. The assumption made about time was fairly simple and probably not unique to this biology group either. “Their general ideas were that students could answer lower-order questions quickly and that higher-order questions were time-consuming to answer.” (p. 52) This idea about time was also related to question difficulty—in addition to being harder, higher-order questions were assumed to take more time to answer.

Further analysis revealed that these faculty also thought that questions were more difficult and required more time to answer if the question type was unfamiliar to students. If the question form was familiar and therefore easy for students to answer, chances were it wasn’t a higher-order question.

Finally, beyond Bloom was the issue of whether questions should have one correct answer or several possible correct options with one of those being the better answer. The debate here is probably one more likely to be had in fields like biology than in fields like philosophy. “Most of the participants felt that questions with multiple reasonable solutions were not higher-order, even though they themselves used higher-order cognitive skills to solve them.” (p. 55)

The researchers express concern about the assumptions made by these two faculty groups. Are higher-order questions always more difficult for students to answer? They cite research on the topic that is “equivocal” and recommend that the two not be conflated. (p. 56) They conclude with this implication, relevant to anyone trying to write questions that test students’ higher-order thinking skills: “If asked to write or evaluate higher-order questions, they [biologists] do not use Bloom’s Taxonomy in a vacuum. Rather, they bring to the task their own conceptions and beliefs about higher-order questioning. Some of these conceptions are misguided, and further training could help correct these misconceptions.” (p. 56)

This is an interesting analysis that can be used to clarify thinking about higher-order questions and reveal assumptions about them that merit discussing and possibly challenging.

Reference: Lemons, P. P., & Lemons, J. D. (2013). Questions for assessing higher-order cognitive skills: It’s not just Bloom’s. CBE-Life Sciences Education, 12 (Spring), 47–58.
