Teaching Practices Inventory Provides Tool to Help You Examine Your Teaching

Here’s a great resource: the Teaching Practices Inventory. It’s an inventory that lists and scores the extent to which research-based teaching practices are being used. It’s been developed for use in math and science courses, but researchers Carl Wieman and Sarah Gilbert suggest it can be used in engineering and social sciences courses, although they have not tested it there. I suspect it has an even wider application. Most of the items on the inventory are or could be practiced in most disciplines and programs.

The article (in an open access journal and available on the website above) provides a detailed account of how the inventory was developed and has been tested so far. Carl Wieman is a Nobel Prize winner in physics who in recent years has been working on a variety of STEM projects. This article illustrates the high caliber of his work, completed with a variety of colleagues.

The inventory takes 10 to 15 minutes to complete (53% of the research cohort took it in 10 minutes or less) and is designed for use by individual faculty. It is a self-report inventory, with the power to promote a comprehensive review of and reflection on teaching practices. Inventory items are organized into eight categories: 1) course information provided to students; 2) supporting materials provided to students; 3) in-class features and activities; 4) assignments; 5) feedback and testing; 6) other (such as pre-post testing); 7) training and guidance of TAs; and 8) collaboration or sharing in teaching.

Of course, the insights provided by the inventory are a function of the truthfulness with which it's completed, but if you're using it on your own, there is no reason to be less than candid. The article reports on efforts to test the validity of faculty responses; the researchers found high levels of consistency between individual answers and those provided by external reviewers.

The inventory comes with a scoring rubric that gives points (of varying quantity) for practices documented by research to improve student learning. Not all practices on the inventory merit points. It would be best to first take the inventory (a clean copy is available here), score it using the rubric in Appendix 1 of the article, and then read the article, which explains and justifies the point values with references to the relevant research. The article also contains the scoring results from 179 inventories completed by faculty in five different science and math departments. Those data are not normative but do offer something against which individual scores can be benchmarked.

The article explains how completed inventories can be used to look at practices within a department or across several of them. The inventory only indicates whether a practice is being used. It says nothing about the quality of the implementation. There’s an interesting discussion in the article about how the research team tried to look at quality and how difficult they discovered it was to ascertain.

Should teachers be doing (or have their students doing) all 51 of the research-supported practices and feel guilty if they aren't? The authors point out that many of the items are already routinely used. And although the practices are listed individually on the inventory, many are related, overlapping, and mutually reinforcing. If some of the practices are not being used, they can be implemented incrementally.

How strongly can I say that you really ought to take this inventory? At the very least, spend time looking at it. The data it provides stand in sharp contrast to the typically judgmental, summative feedback provided by end-of-course student ratings. The inventory is predictive only in the sense that it identifies practices that research has shown help students learn—and who among us wouldn't want to use those kinds of practices? Practices are things teachers do, and they involve concrete actions, which is why an inventory like this can effectively guide the improvement process.

The authors say that the inventory “provides a rich and detailed picture of what practices are used in a course.” (p. 562) Normally, that’s a claim I’d read with some cynical suspicion, but it’s an apt description in this case.

Reference: Wieman, C., and Gilbert, S. (2014). The teaching practices inventory: A new tool for characterizing college and university teaching in mathematics and science. CBE—Life Sciences Education, 13(3), 552–569.

© Magna Publications. All rights reserved.
