Points and Leaderboards: A Tale of Two Platforms

Gamification has become a hot topic as instructors and instructional designers work to create engaging learning experiences in online course environments. While there are a number of key features in any gamified system, the awarding of points seems to garner the most attention. Students can earn points for the completion of specifically defined activities, and accumulated points indicate progress or engagement with an activity or set of activities.

To illustrate different ways that points can be deployed in a gamified learning system, we compare two platforms: one based on progress through activities and one based on engagement with other students. Each has its advantages and disadvantages, and both can be used to increase student motivation and performance in a course.

The first platform is Curatr. Of the two platforms we are comparing, Curatr’s workflow is much more gamelike. Course developers begin by mapping out a path for players/students to follow. There is a set beginning and end. Students navigate through a succession of levels by completing learning modules that comprise a related series of learning objects. All completed activities earn experience points (XP), and by earning a predetermined number of points, the player/student progresses from one level to the next until the course is completed.
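For readers who like to see the mechanics spelled out, here is a minimal sketch, in Python, of how XP-threshold level progression of this kind could work. The class name, point values, and level thresholds are invented for illustration; they are not Curatr's actual configuration or code.

    # Hypothetical sketch of XP-threshold level progression (not Curatr's actual code).
    LEVEL_THRESHOLDS = [0, 100, 250, 450]  # XP needed to reach levels 1-4 (assumed values)

    class LearnerProgress:
        def __init__(self):
            self.xp = 0

        def complete_activity(self, points):
            """Award XP for a completed learning object and return the resulting level."""
            self.xp += points
            return self.current_level()

        def current_level(self):
            """Highest level whose XP threshold the learner has met."""
            return max(i + 1 for i, needed in enumerate(LEVEL_THRESHOLDS) if self.xp >= needed)

    # Example: three activities worth 40 XP each move the learner from level 1 to level 2.
    learner = LearnerProgress()
    for _ in range(3):
        level = learner.complete_activity(40)
    print(learner.xp, level)  # 120 2

The essential design choice in a system like this is that advancement depends only on accumulated XP, not on how well any individual activity was performed.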

The second platform is Yellowdig. Yellowdig isn’t structured like a game and isn’t meant to deliver fully scripted learning experiences. Instead, it mimics social media platforms such as Facebook and Twitter. Yellowdig boards provide a Facebook-like interface that enables students to share course-relevant ideas, images, videos, and links in an intuitive and familiar environment. As on Facebook, social reinforcement of these sharing behaviors, such as “likes” and comments, motivates participants to continue interacting. Yellowdig uses XP to augment these social rewards and provide quantifiable evidence of student participation.

Making sense of XP

In each of these gamified systems, XP are an accounting device, one tangible way to measure how well students are meeting the expectations that have been designed into the system. In Curatr, they primarily indicate progress, while in Yellowdig, they reflect engagement. In both cases, XP function as motivational feedback, either driving the student through the course or driving the interactions among students. In neither case should XP be conflated with grades.

Grades tend to be product-based, while XP are designed to be process-based. Grading usually involves a set of qualitative guidelines and an instructor’s evaluation of a student’s work against those guidelines. XP are instantaneous and are awarded based on a predetermined, easily measurable set of behaviors. Students understand that if they perform the necessary behaviors, they will receive the points. There is no waiting for points to be awarded and no implicit judgment call by the instructor.

Problems tend to arise when course designers try to imbue XP with grade-like qualities. We discovered that this was much more likely to happen in Yellowdig than in Curatr. Because Yellowdig was created as a substitute for the traditional discussion board, instructors would design assignments for Yellowdig that mirrored discussion board assignments. In a traditional discussion board, assignments are typically assessed using rubrics that carefully outline an instructor’s expectations for student contributions. The more closely a student’s contribution meets the objectives articulated in the rubric, the more points he or she will receive. The automated points system in Yellowdig is not designed to make those quality determinations. Instructors can control point distribution only based on observable behaviors such as number of words in a pin, number of comments made, or number of times someone liked a pin.
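To make “observable behaviors” concrete, the following minimal sketch shows how an automated rule set of that kind could be expressed. The point values and the minimum word count are assumptions made for illustration, not Yellowdig’s actual settings.

    # Hypothetical sketch of behavior-based point rules (values invented, not Yellowdig's settings).
    MIN_WORDS_FOR_PIN_POINTS = 50  # assumed minimum word count for a pin to earn points
    POINTS_PER_PIN = 10
    POINTS_PER_COMMENT = 2
    POINTS_PER_LIKE_RECEIVED = 1

    def score_contribution(pin_text, comments_made, likes_received):
        """Award points from countable behaviors only; quality is never judged."""
        points = 0
        if len(pin_text.split()) >= MIN_WORDS_FOR_PIN_POINTS:
            points += POINTS_PER_PIN
        points += comments_made * POINTS_PER_COMMENT
        points += likes_received * POINTS_PER_LIKE_RECEIVED
        return points

    # Example: a 60-word pin with 3 comments made and 5 likes received earns 10 + 6 + 5 = 21 points.

Nothing in such a rule set inspects the substance of a pin; it only counts, which is exactly why it cannot stand in for a rubric.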

Many instructors worried that students would take advantage of a system that allows them to keep posting and commenting “just to rack up points.” They were also wary of substituting quantity measures for quality measures. These concerns would have been just as valid in Curatr as in Yellowdig, but the issue never came up. This is most likely because Curatr awards points for behaviors that instructors don’t often grade. Instructors typically administer tests and use test grades as a proxy for whether students have listened to a lecture, read a chapter, and understood the material therein. In Curatr, students are awarded XP for these preparatory behaviors. In this context, the XP reinforce behaviors we want to encourage but don’t replace summative assessments.

The Yellowdig points system, on the other hand, requires instructors to accept automatically awarded XP in place of rubric-based quality points. The absence of clear goals, of a predetermined set of behaviors to earn XP, and of a clear path on which to mark progress left many course developers feeling that they would be setting up a system that awards XP indiscriminately. At the outset, we had no real answer to these concerns other than a “pie-in-the-sky” set of assurances about the benefits of a collaborative social learning environment.

Through the efforts of our course developers/instructors, we can now point to more concrete evidence of the efficacy of Yellowdig boards. The pilot group of 10 course developers/instructors all reported a significant increase in student engagement in course discussions. Quality also improved, with instructors reporting that, compared with the traditional discussion board, the number of “gratuitous” comments (e.g., “nice post!”) was considerably reduced. One instructor estimated that the 50/50 split between substantive and gratuitous contributions he experienced on the traditional discussion board had changed to an 80/20 split in Yellowdig.

Instructors also reported that students “liked” (upvoted) contributions made by their peers only if they actually believed those contributions had merit. Poor-quality contributions were routinely ignored, which meant that students who weren’t fully engaged in the discussion were not able to accrue the same number of points as better-performing students. Some of these students, who in more traditional circumstances might have blamed instructor bias for their poor assessments, began looking more carefully at posts that were getting more positive attention. Many took advantage of the opportunity to improve their own performance by modeling future posts after more popular posts. The social aspects of this gamified learning system also influenced instructors. Instructors reported taking a second look at student contributions when they noticed them getting upvoted by other students. They also appreciated the constructive and insightful comments students would provide each other.

Leaderboards and what points mean

We designed learning experiences in Curatr using points as a means of moving learners along a predetermined path. The earned XP were indicative of a learner’s progress on that path. Therefore, a student’s place on the XP leaderboard showed his or her progress relative to others on the same path. The learning experiences in Yellowdig were designed to encourage collaborative knowledge-building activities among students in a course. Earned XP indicated the level of participation based not only on individual contributions to these activities but also on a student’s role in creating an engaging and meaningful experience for a community of learners. In this context, a student’s place on the XP leaderboard reflected his or her place within this community of learners.

All in all, our experience using these platforms was extremely positive. Neither one could be said to outperform the other because both were extremely useful for creating the type of learning experiences for which they were designed.

Andrew Feldstein is the associate VP of Instruction Technology, and Gulinna A is an instructional designer at Fort Hays State University. 

 
