Evidence of Evidence-Based Teaching

Evidence-based teaching seems like the new buzzword in higher education. The phrase appears to mean that we've identified and should be using those instructional practices shown empirically to enhance learning. Sounds pretty straightforward, but lots of questions haven't yet been addressed, such as: How much evidence does there need to be to justify a particular strategy, action, or approach? Is one study enough? What about when the evidence is mixed, with positive results in some studies and not in others? In research conducted in classrooms, instructional strategies aren't used in isolation; they're used in combination with other strategies. Does that grouping influence how individual strategies function?

Questions like these should prompt more cautious use of the descriptor, but they don't excuse us from considering the evidence and how it might be incorporated into the teaching-learning activities of our courses. I was impressed by a recent article in which three biologists describe how they created a classroom observation tool that identifies specific, evidence-based behaviors and practices: “PORTAAL [Practical Observation Rubric to Assess Active Learning] is one effort to create a tool that translates research-based best practices into explicit and approachable practices” (p. 13).

The tool was designed to assess taped teaching samples, and that's how the faculty research team used it (with interesting results, highlighted in the December issue of The Teaching Professor). I think the team's effort to take research findings and translate them into concrete actions is especially commendable. It's a challenging task given the diversity of research evidence, even in a single area. For example, multiple studies attempt to identify what gets students offering better (more thoughtful, reasoned, higher-order) answers. Some of that research was done with students working in groups, some during whole-class discussions, and a lot in the context of clicker use. To use those findings, a specific yet broadly applicable action must be extracted. In this case, it's pretty easy: students need time to think before they talk. That one is straightforward, but imagine extracting a single action from a diverse collection of studies exploring the role of feedback in skill development.


Here's a sampling of actions from the 21 that appear on the PORTAAL instrument. In each bullet, the phrase before the colon identifies the research finding, and what follows describes the specific behaviors, actions, or practices the researchers propose observers look for. The article (in an open-access journal) lists the studies (in most cases multiple) that support each action. I don't have space to list all those references in this post, so if you want to read the evidence, I encourage you to consult the journal article.

  • Frequent practice: Observe the number of minutes students have the opportunity to talk about content during class
  • Distributed practice: Observe how often the instructor reminds students to use prior knowledge
  • Immediate feedback: Observe how often the instructor hears student logic (reasons for a particular answer) and responds
  • Time to think before discussing answers: Observe how often students are given time to think before having to talk in groups or in front of the class
  • Student confirmation: Observe how often the instructor delivers explicit positive feedback and/or encouragement
  • Error framing: Observe how often the instructor reminds students that errors are part of learning and not something to be feared

The tool doesn't offer a comprehensive listing of evidence-based practices. And, as the researchers note, their goal was identifying concrete actions that can be observed: “Some elements may not be perfectly captured by these types of measures.” Take student confirmation, for example. Teachers confirm students with a variety of nonverbal actions, like a smile or a nod, usually done in conjunction with one another. You won't find items like these on PORTAAL, but the tool starts the work that needs to be done if the research on learning and achievement is to move from research venues to instructional practice: “Following the suggestions outlined in this tool does not guarantee greater student learning, but the tool is a solid, research-supported first step” (p. 13). Time can be spent standing around waving the evidence-based teaching banner, but it's more profitably used like this: delving into the details and considering how they might apply to our teaching.

Reference: Eddy, S. L., Converse, M., & Wenderoth, M. P. (2015). PORTAAL: A classroom observation tool assessing evidence-based teaching practices for active learning in large science, technology, engineering, and mathematics classes. CBE-Life Sciences Education, 14(2), ar23. Access the full article: http://www.lifescied.org/content/14/2/ar23.full
