When faculty see students missing information from a reading, they generally assume that the students did not read the article carefully. However, it may be that the students do not know how to read an academic work. Faculty know how to read this way because they could not have succeeded in academia otherwise, and so they often assume that students must know how to do it as well. But reading academic work for the right information is a skill like any other, one that must be developed and that students often lack.
Students have been shown to read articles differently from faculty. Faculty read an academic work for its underlying concepts, whereas students often read for facts (Rhem, 2009). Faculty also read actively, asking questions as they go, whereas students simply try to remember information. Teachers can therefore improve their students' work considerably by teaching them how to read academic work.
Erika G. Offerdahl and Lisa Montplaisir used student-generated reading questions (SGRQs) in their biochemistry course to teach academic reading skills. They asked students to submit one question after each reading, focusing on conceptual rather than factual issues. The exercise forced students to read actively by asking questions and to read for the right types of information. It also helped them think like scientists, one of the skills the course was meant to teach.
The exercise also helped the instructors decide what to cover in class. In a hybrid course, where content is delivered online and class time is devoted to student engagement, the questions can be used to choose which topics to address in person. Students' questions can also be turned into problems for other students to solve in class.
The exercise can easily be applied to a fully online course using discussion boards. The faculty member sets up a discussion board for each reading, and each student posts a question and answers another student's question. The questions can also inform course revisions by revealing areas where students commonly have difficulty. Faculty often believe that their assessments are sufficient to identify student problem areas, but students frequently struggle in areas that the assessments miss and that surface only through open-ended questions.
The researchers further refined the exercise into a diagnostic instrument by sorting the students' questions into three levels of thinking skill: those demonstrating conceptual understanding, those demonstrating questioning skills, and those demonstrating metacognition, the ability to evaluate one's own level of understanding. With this rubric, a teacher can use the student-generated questions to gauge students' general level of scientific thinking.
Because this was an introductory class, most students scored in the lowest category. One would expect students to move up the scale as they progress into later courses, so by applying the method across different courses, a department can track the development of thinking skills over a program. The instrument can thus serve program-level assessment of learning outcomes.
Although the study focused on a hard science course, the approach can easily be applied to other disciplines. It also fits the online format well, since the LMS can host an unlimited number of questions that the instructor can sort by reading, topic, or other categories.
Consider ways to incorporate student-generated reading questions into your courses.
References
Offerdahl, E. G., & Montplaisir, L. (2014). Student-generated reading questions: Diagnosing student thinking with diverse formative assessments. Biochemistry and Molecular Biology Education, 42(1), 29–38.
Rhem, J. (2009). Deep/surface approaches to learning in higher education: A research update. Essays on Teaching Excellence: Toward the Best in the Academy, 21(8).