Teach Reading Skills with Student-Generated Questions

When faculty see students missing information from a reading, they generally assume that the student did not read the article carefully. However, it may be that the student does not know how to read an academic work. Faculty learned to read academic work as a condition of succeeding in academia, and consequently they often assume that students already know how to read it as well. But reading academic work for the right information is a skill developed like any other, and one that students often lack.

Research shows that students read articles differently from faculty. Faculty read an academic work for underlying concepts, whereas students often read for facts (Rhem, 2009). Faculty also read actively, asking questions of the text as they go, whereas students mostly try to remember information. Teachers can therefore improve their students' work considerably by teaching them how to read academic work.

Erika G. Offerdahl and Lisa Montplaisir used student-generated reading questions (SGRQs) in their biochemistry course to teach academic reading skills. They asked students to submit one question they had after each reading, focusing on conceptual issues over factual issues. This exercise forced the students to read actively by asking questions and to read for the right types of information. Moreover, it helped the students think like scientists, which is one of the skills the course taught.

The exercise also helped faculty decide what to cover in class. In a hybrid course, where content is delivered online and class time is devoted to student engagement, the questions can guide which topics to address in person. Students' questions can also be turned into problems for other students to solve in class.

This in-class exercise can easily be adapted to a fully online course using discussion boards. The faculty member sets up a discussion board for each reading, with each student posting a question and answering another student's question. The questions can also inform course revisions by revealing areas where students commonly have difficulty. Faculty often believe that their assessments are sufficient to identify student problem areas, but students frequently struggle in areas that fall outside those assessments and that surface only through open-ended questions.

The researchers further refined the exercise into a diagnostic instrument by sorting the questions students raised into three levels of thinking skills: those that demonstrated conceptual understanding, those that demonstrated questioning skills, and those that demonstrated metacognition, the ability to evaluate one's own level of understanding. With this rubric, a teacher can use the student-created questions to gauge the students' general level of scientific thinking.

As this was an introductory class, most students scored in the lowest thinking skills category. One would expect that students would move up the spectrum as they progress into later courses, and so by applying the method in different courses, departments can use it to measure the development of thinking skills across a program. Thus, the instrument can be used for program-level assessment of learning outcomes.

Although this study focused on a hard science class, one can easily imagine applying it to other courses. It also fits the online format, because the LMS can host an unlimited number of questions that the instructor can segment by reading, topic, and other categories.

Consider ways to incorporate student-generated reading questions into your courses.

References

Offerdahl, E. G., & Montplaisir, L. (2015). Student-generated reading questions: Diagnosing student thinking with diverse formative assessments. Biochemistry and Molecular Biology Education, 42(1), 29–38.

Rhem, J. (2009). Deep/surface approaches to learning in higher education: A research update. Essays on Teaching Excellence: Toward the Best in the Academy, 21(8).
