Taking Measure of Our DEI Efforts

Credit: iStock.com/SolStock

With all the unrest and violence that has rocked our country over the past few years, this fall you might be giving added attention to issues of diversity, equity, and inclusion (DEI) in your courses. Maybe peers or administrators have encouraged you to think about ways to infuse a wider range of viewpoints into your course designs and materials. And if you’ve needed help, there’s been no shortage of higher ed columns and seminars devoted to the topic. Even if our changes are small, such a response to the calamities of our times makes perfect sense.

But I’ve wondered whether we shouldn’t be a bit circumspect about the actual effects these modifications will have on learners. In a recent column in TIME magazine, Anne Helen Petersen describes how many DEI efforts in the private sector may have done more harm than good. According to Petersen, such programs amount to little more than window dressing, giving workers the impression that corporate is listening while actually being designed to make employees stop thinking about inequalities altogether. “A three-hour webinar will not create a culture of inclusion,” she writes. “Superficiality is part of the point.”

We may not see our own DEI efforts as akin to those described by Petersen, but depending on what we’re doing, maybe we should. Token efforts to infuse another viewpoint or to substitute a few new readings for others are probably having less impact on our learners than we’d like to admit.

Take what happened when some political science instructors introduced an exercise aimed at informing their students about the underrepresentation of women in American politics (Nemerever et al. 2021). The exercise was straightforward, as were its measures of student learning. Three sections of an introductory course at a “gender-balanced” university were treatment groups, while another three served as controls. At a different, “predominantly male” institution, there was one treatment section and one control section. Treatment groups read an article about US women’s abysmal global ranking in terms of representation in government, while control groups read a piece about expanding the House of Representatives that lacked any gender emphasis. Students responded with short reaction papers.

In class, the students then worked in small teams on a worksheet (the same for treatment and control groups) designed to measure the effects of the readings, especially as they pertained to political representation in America. The researchers logically hypothesized that the treatment groups would be more likely to discuss gender-related representation, and that the effects would be greater at the gender-balanced institution than at the male-dominant one. The reaction papers, worksheets, and in-class discussions were scrutinized to test these assumptions.

It turns out the researchers were only partially right. The treatment groups did, in fact, discuss gender more, but the students at the predominantly male institution examined gender to a greater degree than did their counterparts at the gender-balanced university.

Even more surprising, the gender-specific reading “did not induce any changes in factual knowledge or lasting attitudinal effects” in the treatment groups at either institution (5). In fact, the effects were negative at the male-dominant university, a result the researchers call “peculiar” (5). So, although the experiment was a short-term success in getting students thinking about and discussing gender in politics, it failed to have any measurable longer-term impact. The authors conclude that “single-shot lessons on women’s representation in political science classrooms [are] insufficient to change perceptions, attitudes, and motivations underlying near ubiquitous political gender imbalances” (1).

Some of my own research complements these findings. From a recent national survey that I co-directed on the public’s views and uses of history (Burkholder and Schaffer 2021), I learned that respondents felt that women are largely neglected by historians. In fact, given a choice of nine topics, women were ranked as the one most in need of greater attention, while men were perceived as the most over-analyzed. None of that was unexpected. But a surprise came when we considered the gender cross-tabulations: female respondents were nearly twice as likely as males not to have an opinion on these matters, be it men’s or women’s history. Perhaps such apathy is an artifact of feeling ignored for so long, but we really don’t know.

The findings of Nemerever et al. caused me to ponder the effects of some of my own DEI-related exercises. In one case, students examined the treatment given to Cleopatra in a variety of textbooks. Although she received varying degrees of attention, learners remarked that it was often in the form of sidebars, casting her as an interesting woman in the ancient world but literally and figuratively marginalizing her in the process. That’s a potentially eye-opening finding. Yet I could be fooling myself about the students’ takeaways. There may have been no lasting impact, or perhaps even a reverse one. The lesson for some of them could be that women really don’t belong in the main historical narrative.

I’ve written earlier (Burkholder 2020) about how some of our most basic assumptions regarding teaching, learning, and the world around us actually have no evidentiary basis, or have even been disproven. Course revisions to address major social problems can make us feel good about ourselves and please our superiors, but their impacts on students could be marginal or even contrary to our intentions. DEI effects can be measured, and the research adduced here shows us how. If we fail to do that, we risk mimicking those hollow corporate diversity “solutions” that Petersen warns us about.

References

Burkholder, Pete. 2020. “What You Know that Just Ain’t So.” The Teaching Professor, April 6, 2020. https://qa.teachingprofessor.com/topics/student-learning/what-you-know-that-just-aint-so

Burkholder, Pete, and Dana Schaffer. 2021. History, the Past, and Public Culture: Results from a National Survey. American Historical Association. https://www.historians.org/history-culture-survey

Nemerever, Zoe, Kelly Piazza, and Seth Hill. 2021. “Incorporating Gender Politics into Introduction to U.S. Government Curriculum.” College Teaching (ahead-of-print version): 1–6. https://doi.org/10.1080/87567555.2021.1950603

Petersen, Anne Helen. 2021. “Companies Are Embracing Empathy to Keep Employees Happy. It’s Not That Easy.” TIME, July 22, 2021. https://time.com/6082524/corporate-empathy-trap


Pete Burkholder, PhD, is professor of history at Fairleigh Dickinson University, where he served as founding chair of the faculty teaching development program from 2009 to 2017. He is on the editorial board of The Teaching Professor, is a consulting editor for College Teaching, and serves on the national advisory board of the Society for History Education.

One Response

  1. This is interesting and important, but the unfortunate impression it leaves is that this work is not worth doing. One can easily start formulating hypotheses about why the equity-seeking groups mentioned above don’t want to engage with it. Maybe because it’s hard work and puts the focus on them in a way they would rather avoid. Given a different approach, they might feel differently. Likewise, maybe the males in the first experiment whose results became “negative” were furious that they were being forced to think about something that makes them feel some kind of guilt, which maybe they need to get over, or maybe the approach needs to be changed. Hope this helps.

