The Questions to Ask about Research on Teaching and Learning


Faculty have access to more information about college teaching than ever before. Researchers have studied a host of instructional approaches and published results in myriad journals. Educators have shared summaries of and links to such studies informally on websites and through Twitter feeds. This is good news for those of us who want to learn more about a particular instructional method or technique before we try it in our own courses.

Not all of us are educational researchers, however, and that brings some challenges for making sense of these studies. The research questions and methods may not be as familiar to us as those in our home disciplines. Unfamiliar research approaches can make it challenging to determine how much stock to place in study findings. This challenge increases when different studies report mixed or contradictory results.

How can we assess the research quality? How can we determine whether a given instructional method is something worth trying in our courses? What follows is a set of questions that you can use when evaluating individual studies and collections of them. The goal of these questions is to help you glean information from research to consider whether or how to implement a pedagogical approach.

Is the research question one you want to know the answer to? Education researchers ask and answer questions that may or may not have practical application for our teaching. For example, although hundreds of researchers have asked whether active learning is superior to lecture, some of us are not particularly interested in the answer. We don’t see lecture and active learning as an either/or proposition and instead believe that we can use both approaches together. What we might want to know instead are the combinations or particular features of lectures or active learning methods that make them more or less effective.

And some researchers have found answers to such questions. For example, Balch (2012) discovered that following a lecture with a demonstration and debrief rather than another lecture improved student performance on a follow-up test. Streveler and Menekse (2017) examined different forms and features of active learning. They found that methods that involve students interacting with others and engage them in creating content are related to greater gains in student learning than methods that don’t. When first assessing an article, then, consider whether it is relevant to your situation and whether it will tell you something that you want to know.

Are the methods sound? Common sense is critical when reading research, and a particularly good time to apply it is when evaluating a study’s methods. The researchers should have described the participants in some detail, including the sample size, the method for selecting and recruiting participants, and the relationship between the researcher and the participants. They also should have provided ample description of the method of data collection, including the data collection protocols or instruments and time given for the data collection, as well as information about data management, storage, and analysis. Finally, the researchers should have described efforts to ensure the quality of the research design and the study findings (whether validity and reliability, trustworthiness, or other). If researchers haven’t explained information like this, then they haven’t given you reason to trust their results.

Are the findings research-based or evidence-based? These widely bandied-about terms are often used interchangeably even though they represent different approaches to educational research. The term “research-based teaching” refers to teaching methods documented as effective through any kind of research on instruction, including qualitative (e.g., grounded theory, phenomenology, ethnography) or exploratory quantitative (e.g., correlational and survey research) methods. Such investigations are effective when the researcher wants to know how faculty or students feel about an instructional method or to identify potentially important features or variables. These methods do not establish a causal relationship, however, and the good ones don’t claim to do so. Be wary of any descriptive or correlational study that claims causation, using words such as “influence” or “impact.” The term “evidence-based teaching,” on the other hand, signifies instructional methods documented through experimental studies. In these studies, researchers have examined whether a causal relationship exists between variables, or, in the case of pedagogical research, whether the instructional intervention can be tied directly to improvements in student learning outcomes. Researchers who use these methods have put appropriate controls and designs in place in order to identify a causal relationship.

The differences between these two research approaches matter when gauging a study for its implications for teaching. For example, let’s consider a recent study about problem-based learning (PBL, as it’s usually called). Henry et al. (2012) examined student perceptions of the method and found that students experienced several challenges, including discomfort with the course structure, the changed role of the instructor/facilitator, and their own new roles. The study is important because it gives voice to the student experience, and this can guide thinking about teaching with this method. The study does not, nor does it claim to, show that PBL influences student learning outcomes, positively or negatively. On the other hand, dozens of evidence-based experimental designs show promising effects of the method on student learning, particularly in the development of clinical skills (see, for example, Günter & Alpat, 2017). These don’t necessarily highlight potential challenges or benefits of the method, but they provide useful information about outcomes.

When reading an article, consider what kind of research it is as you weigh whether or how to use the findings to inform your teaching decisions. If you want to know what factors students find important, research-based reports are appropriate. If you want to know whether a given instructional method might be the cause of an increase in student learning outcomes, seek out experimental or quasi-experimental studies that document a learning change that is the result of the instructional intervention. Often a combination of study types can be the most illuminating.

Are the findings significant? There is often a difference between what a researcher considers significant and what an educator wants to know when it comes to results related to college teaching. Researchers, particularly statisticians, use the term significant to mean that a result is unlikely to be attributable to chance. An educator looking for a significant result usually wants to know whether the findings are important or meaningful. For example, in Academically Adrift, Arum and Roksa (2011) reported no statistically significant evidence of improvement in undergraduate students’ critical thinking skills among the study participants. Some educators took significance here to mean importance rather than non-attributability to chance, which led to overbroad interpretations of the findings that the researchers themselves had not asserted.
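The gap between statistical and practical significance can be made concrete with a short simulation. The numbers below are invented for illustration: two very large groups whose true means differ by a trivial half point on a 100-point test. With a sample this large, the difference is statistically significant (the z statistic lands far beyond the conventional 1.96 cutoff), yet the effect size (Cohen’s d) shows it is negligible in practical terms.

```python
import math
import random

random.seed(1)

# Hypothetical data: two very large groups whose true means differ
# by only 0.5 points on a 100-point test (SD = 10 in both groups).
n = 50_000
group_a = [random.gauss(70.0, 10.0) for _ in range(n)]
group_b = [random.gauss(70.5, 10.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Two-sample z statistic (a normal approximation is fine at this n).
se = math.sqrt(sd(group_a) ** 2 / n + sd(group_b) ** 2 / n)
z = (mean(group_b) - mean(group_a)) / se

# Cohen's d: the same difference expressed in standard-deviation units.
pooled_sd = math.sqrt((sd(group_a) ** 2 + sd(group_b) ** 2) / 2)
d = (mean(group_b) - mean(group_a)) / pooled_sd

print(f"z = {z:.1f}")   # far beyond 1.96, so p << .05: "significant"
print(f"d = {d:.2f}")   # yet tiny by conventional effect-size benchmarks
```

The point is not the particular numbers but the pattern: with enough participants, almost any nonzero difference becomes statistically significant, so an educator reading a study should look for the effect size as well as the p-value.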

Could the finding be a one-off? When reading educational research, context is critical. Much of the existing research has been done at single institutions, in single courses, with a small sample of students, often with the instructor also serving as the researcher. These issues don’t necessarily constitute fatal flaws, but they raise questions about generalizability. Even if the researchers found statistically significant improvements to learning, you may not want to upend your teaching based on the results of a single study conducted at a single institution.

Fortunately, it is possible to make meaning from single studies, particularly when considering them together. Indeed, when the question asked in a single study has been taken up by multiple researchers, a meta-analysis often follows that synthesizes and combines results into an overall effect size. Such analyses can help reduce some of the methodological limitations of single studies, allowing us more confidence in the findings. This is one reason the study by Freeman et al. (2014), which combines results from 225 individual studies, has been so influential. Their work shows that student learning outcomes improve and failure rates decrease when instructors include active learning methods.
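To see what “combining results and finding an effect size” involves, here is a minimal sketch of the fixed-effect, inverse-variance pooling that many meta-analyses use. The per-study numbers are made up for illustration and are not taken from Freeman et al. (2014); real meta-analyses also screen studies for quality and test for heterogeneity before pooling.

```python
import math

# Hypothetical per-study results: (effect size d, variance of that estimate).
# Invented numbers for illustration only.
studies = [
    (0.30, 0.040),
    (0.55, 0.090),
    (0.10, 0.025),
    (0.45, 0.060),
]

# Inverse-variance (fixed-effect) weighting: more precise studies
# (smaller variance, usually larger samples) count for more.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))

print(f"pooled d = {pooled:.2f} ± {1.96 * se_pooled:.2f} (95% CI)")
# → pooled d = 0.27 ± 0.20 (95% CI)
```

Notice how the pooled estimate sits between the individual studies’ results and carries a narrower uncertainty than any single study alone, which is exactly why syntheses like Freeman et al.’s inspire more confidence than one-off findings.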

What do you do when the results of multiple studies conflict? In the absence of a meta-analytic synthesis, consider using a “weight of the evidence” approach in your decision making. Taking the studies together, ask which side the more credible results support. For example, in a recent Faculty Focus post, Weimer (2017) described results from several studies exploring whether quizzes improve college student learning. She notes that the results are mixed but that more studies report a positive association than a negative one. The benefits of quizzes included that students were more motivated, studied more, participated more in class, and got better grades. The weight of this evidence suggests that quizzes yield positive results. The conditions under which quizzing was used varied so much across the studies, however, that anyone reviewing them in search of support for a particular approach to quizzing would do well to consider the institutional type, the level of the students, the subject matter, and the type and timing of the quizzes described in each study.

Conclusion

No study is perfect, and a research finding, even from a rigorous and robust study that’s in a context similar to yours, is no guarantee that you’ll get the same results. Your institution is different. Your students are different. Your course is different, and you don’t teach in the exact same way as the researcher. It’s best to think of a study as a description of a practice that might work for you and your students.

Some pedagogical approaches are better than others, however, and a well-done study can inform your decision-making. Reading pedagogical research can be a challenging task, but the rewards are improved knowledge of teaching that helps us make changes that improve teaching practices and student learning. When read well and critically, pedagogical research can be a useful resource for improving college teaching.

References

Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Chicago: University of Chicago Press.

Balch, W. R. (2012). A free-recall demonstration versus a lecture-only control: Learning benefits. Teaching of Psychology, 39(1), 34-37.

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23). https://doi.org/10.1073/pnas.1319030111

Günter, T., & Alpat, S. K. (2017). The effects of problem-based learning (PBL) on the academic achievement of students studying “electrochemistry.” Chemistry Education Research and Practice, 18(1), 78–98.

Henry, H., Tawfik, A., Jonassen, D., Winholtz, R., & Khanna, S. (2012). “I know this is supposed to be more like the real world, but . . .”: Student perceptions of a PBL implementation in an undergraduate materials science course. Interdisciplinary Journal of Problem-Based Learning, 6(1), 43–81.

Streveler, R. A., & Menekse, M. (2017). Taking a closer look at active learning. Journal of Engineering Education, 106(2), 186–190.

Weimer, M. (2017). Do quizzes improve student learning? A look at the evidence. https://qa.teachingprofessor.com/articles/teaching-professor-blog/quizzes-improve-student-learning/

Claire Major is a professor of higher education and chair of the Department of Educational Leadership, Policy, and Technology Studies at the University of Alabama. cmajor@ua.edu
