Engaging with the Engagement Issue

Credit: iStock.com/quavondo

There’s no shortage of materials pertaining to student engagement in higher ed. I’ve attended teaching conferences where anywhere between one-third and one-half of the sessions could be slotted under the engagement rubric. I’ve further found, while conducting teaching observations, reviewing course syllabi, and reading teaching philosophy statements, that the engagement issue looms large. Although it’s sometimes marshaled effectively to attain learning goals, engagement can too easily become an end unto itself instead of a means to an end. It seems to have become an article of faith that if students are visibly engaged, the learning inevitably follows.

This is why a recent article by Jose Eos Trinidad et al. (2020) is so interesting. Through a series of interviews, the authors sought to find out whether engaged learning translated to effective learning from the perspective of students. A number of attendant problems surfaced in the process. Among them, how does one define “engaging”? (The researchers settled on practices the students enjoyed.) And what’s “effective”? (Here, student perceptions were the only marker.) Having collected the evidence, the authors disaggregated engaging from effective, finding that the two don’t necessarily go hand-in-hand in the minds of those being taught.

Trinidad et al. arrange their data into a four-zone matrix. In the “engaging and effective” zone are such practices as recitation sections, interactive lecturing, the use of visual aids, and real-life applications. “Unengaging but effective” includes “slow and repetitive” lectures, quizzes, individual research, and text-based discussions. Interviewees voiced a strong preference for memorization of small bits of material, even though they reported not learning much from the activity, thus earning it an “engaging but ineffective” designation. Practices falling into the “unengaging and ineffective” zone included graded recitation sessions, “boring” lectures, and listening to student reports.

Certainly, one can quibble with some of the results above. Students invariably like visual aids, though often under the flawed assumption that they’re “visual learners.” And the difference between “slow and repetitive” lectures and “boring” ones isn’t clear-cut. But other results probably aren’t surprising: the ineffectiveness of student reports and the preference for memorizing isolated pieces of information are two examples. The greatest liability of Trinidad et al.’s research is that it relies exclusively on indirect measures of learning, a problem the authors readily acknowledge. “This is a limitation,” they write, “since studies have shown that students’ self-assessments do not always align with their actual performance” (3).

The additional step of correlating student perceptions with direct measures of learning is crucial, though more difficult. Having done that sort of research myself, I can readily attest to how much more painstaking and time-consuming it is than collecting perceptions from surveys and interviews. For that reason, it’s more expedient for faculty to devise and stick with practices that resonate with students as “engaging” or “enjoyable,” believing that quality learning simply must be occurring in such contexts.

I can use myself as a ready example of how problematic this mindset can be. Many years ago, I devised a digital storytelling assignment for my introductory-level history classes. From the very first iteration, the exercise clearly “worked” from an engagement and enthusiasm standpoint, ably harnessing students’ learning preferences, popular culture, and course materials in ways that other approaches never had. Indirect measures of learners’ experiences further reinforced this putative success: students found the assignment thoroughly engaging and reported learning a great deal from it, even if direct evidence of that learning was elusive. So captivated was I by students’ enthusiasm that I used engagement as the primary marker of educational success.

When I backward redesigned the courses years later, it became painfully obvious that the digital stories weren’t adequately addressing my own stated learning goals. Let me be clear: there’s nothing inherently wrong with digital storytelling as a vehicle for learning. Yet, given my revised and more ambitious educational goals and how much class time the digital stories required, I simply couldn’t justify the practice any longer. It was with regret, but out of necessity, that I deleted the assignment from my assessment scheme.

I fully admit that I’d rather have engaged students than unengaged ones. That said, Trinidad and his colleagues’ work signals that engagement isn’t all it’s cracked up to be. An “engaged” student may be one who doesn’t view an activity as especially useful but finds it enjoyable and perceives that as having value. We shouldn’t be surprised by students’ responses, because this isn’t just an educational issue. Social media and search engine companies create sophisticated algorithms “optimized for engagement,” oftentimes with negative results. Such engagement, playing to base instincts, “undermines democracy and public health” as well as “increases political polarization and fosters hostility to expertise and facts” (McNamee 2020, 21).

Rest assured, those multibillion-dollar companies have gathered direct evidence of the effects of engagement on their users. Educators likewise must go the extra step, using the principles of the scholarship of teaching and learning (SoTL), to measure the actual effects of engaged and unengaged learning. Trinidad et al. provide a useful framework to begin that process.

References

McNamee, Roger. 2020. “Facebook Cannot Fix Itself.” Time, June 4, 2020. https://time.com/5847963/trump-section-230-executive-order.

Trinidad, Jose Eos, Galvin Radley Ngo, Ana Martina Nevada, and Jeanne Angelica Morales. 2020. “Engaging and/or Effective? Students’ Evaluation of Pedagogical Practices in Higher Education.” College Teaching. https://doi.org/10.1080/87567555.2020.1769017.

Pete Burkholder, PhD, is professor of history at Fairleigh Dickinson University, where he served as founding chair of the faculty teaching development program from 2009 to 2017. He is on the editorial board of The Teaching Professor, is a consulting editor for College Teaching, and serves on the national advisory boards of the Society for History Education and ISSOTL-H: The International Society for SoTL in History.

