Incentivizing High-Quality Reviews in Peer-to-Peer Settings: A Feasibility Study with Student Assignments

J.Z. Yue, K. Böhm, S. von Stackelberg
DOI: 10.4018/ijvcsn.2014010101

Abstract

Peer reviewing has been touted as a popular instrument to identify good contributions in communities. A problem of peer reviewing is that reviewers have little incentive to make a significant effort. To address this problem, the authors introduce a new variant of peer reviewing. It differs from conventional peer reviewing in two ways: First, peers who have made a contribution must also review the contributions made by others. Second, each contributor issues ratings regarding the reviews he has received. To incentivize reviewing, the authors design an assessment scheme which assesses not only the quality of the contribution made by a peer but also the quality of the reviews he has submitted. The scheme ranks peers by overall performance, and the ranks determine their payoff. Such a setting gives rise to competition among peers. A core challenge, however, is to elicit objective reviews and ratings. The authors consider two issues that stand in the way of this objectivity: First, they expect preference bias in ratings, i.e., peers tend to prefer reviews with high scores but dislike reviews with low scores. Second, strategic peers might defame others in their reviews or ratings, because they perceive others as competitors. In this paper, the authors propose a heuristic to address these issues. Further, they carry out a user study in a lecture scenario to evaluate their scheme. It shows that students are incentivized to submit high-quality reviews and that the scheme is effective in evaluating the performance of students.
Article Preview

Introduction

Peer reviewing has been an important instrument to identify good contributions in communities, such as papers for scientific conferences or journals. It also finds application in web communities, in order to single out contributions that are particularly valuable, such as good articles published in web encyclopedias like Wikipedia, or good answers in question-and-answer forums like Yahoo! Answers. In the past, however, there has been criticism regarding the quality of peer reviews (Roy, 1985; Tite & Schroter, 2007). One issue is the lack of incentives to exert effort when writing reviews. For example, in scientific conferences, it requires considerable intellectual effort and time to write good reviews and to come up with valuable comments. But in most cases, there is hardly any reward for doing so. Designing a review process that rewards this effort is challenging.

To incentivize high-quality reviews, we propose a new variant of peer reviewing and study its characteristics in this paper. It differs from conventional peer reviewing in two ways: First, individuals (i.e., peers) who have made a contribution (have submitted a paper, for instance) must also review contributions made by others and give scores to them. Second, each contributor must issue ratings regarding the reviews he has received, i.e., review the reviews. We dub this principle two-phase peer reviewing. Any review process incorporates a so-called assessment scheme, i.e., a scheme that quantifies the quality of the contributions (to make acceptance decisions, for instance). To incentivize peers to put significant effort into reviewing as well, our assessment scheme evaluates not only the quality of peers’ contributions, but also the quality of their reviews. We expect peers to accept that their reviewing performance is part of their total performance. This also happens in other communities, where the remuneration of participants depends on different kinds of contributions (Agichtein et al., 2008). For example, a user of Yahoo! Answers can earn points by raising questions, by answering questions of others, and by commenting on the answers.
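To make the two-phase structure concrete, the following is a minimal Python sketch of the objects involved. The class and function names are ours for illustration and do not appear in the article.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Review:
    reviewer: str                   # peer who wrote the review
    contributor: str                # peer whose contribution is being reviewed
    score: float                    # score given to the contribution (phase 1)
    rating: Optional[float] = None  # rating the contributor gives this review (phase 2)

@dataclass
class Contribution:
    author: str
    reviews: List[Review] = field(default_factory=list)

def rate_received_reviews(contribution: Contribution, ratings: Dict[str, float]) -> None:
    # Phase 2: the contributor rates each review he has received,
    # keyed here by the name of the reviewer.
    for review in contribution.reviews:
        if review.reviewer in ratings:
            review.rating = ratings[review.reviewer]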

The scheme we introduce in this paper ranks peers by overall performance, and the ranks determine their payoff. Environments where the payoff is contingent on the rank are referred to as tournaments in the literature (Lazear & Rosen, 1981). In a nutshell, tournaments give rise to competition among peers (cf. Related Work).

In our setting, a core challenge is to elicit ratings which represent the objective perception of the raters regarding the reviews. In this paper, we consider two issues that influence this objectivity. First, we expect that raters naturally prefer reviews with high scores and dislike reviews with low scores (Gibson et al., 2008; Kuehne et al., 2010). We call this bias in ratings preference bias, and we aim to neutralize it. Second, we expect even lower objectivity of reviews and ratings because of the tournament character of our setting. A strategic peer might systematically issue low review scores or ratings to defame his competitors, aiming to improve his own rank. To prevent such manipulations, we need rewards for honesty. Prelec (2004) and Miller et al. (2005) have proposed so-called honest-feedback mechanisms. There, for instance, respondents gain higher rewards when they report their true opinion of a product (i.e., honest feedback) than when they report dishonestly. However, these mechanisms rely on the assumption that respondents have no interest in the resulting ranks; there is no competition among respondents. Thus, these mechanisms are not suitable for our setting. Hence, the following research questions arise: How can we use scores and ratings issued by competitors to assess the quality of contributions and the quality of reviews? How can we motivate reviewers to provide high-quality reviews? How can we motivate raters to issue objective ratings?

As a first contribution of this paper, we propose an assessment scheme to quantify the performance of peers. The total performance of a peer depends on two constituents: his performance when making a contribution and his performance when reviewing. The assessment of his performance as a contributor depends on the scores he has received, where each score is weighted by the quality of the corresponding review. The assessment of a peer’s performance as a reviewer is the average quality of the reviews he has submitted.
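The following is a minimal Python sketch of this assessment under our own simplifying assumptions: review quality is taken as the mean of the ratings a review has received, contributor performance as the review-quality-weighted average of received scores, reviewer performance as the average quality of submitted reviews, and the two constituents are combined by a simple weighted sum. The article’s exact formulas may differ.

def review_quality(ratings):
    # Quality of a single review: here simply the mean of the ratings it received.
    return sum(ratings) / len(ratings) if ratings else 0.0

def contributor_performance(received):
    # Weighted average of the scores a peer received for his contribution,
    # each score weighted by the quality of the review it came from.
    # `received` is a list of (score, review_quality) pairs.
    total_weight = sum(quality for _, quality in received)
    if total_weight == 0:
        return 0.0
    return sum(score * quality for score, quality in received) / total_weight

def reviewer_performance(qualities):
    # Average quality of the reviews a peer has submitted.
    return sum(qualities) / len(qualities) if qualities else 0.0

def total_performance(contribution_perf, reviewing_perf, weight=0.5):
    # Combine the two constituents; the equal weighting is an illustrative assumption.
    return weight * contribution_perf + (1 - weight) * reviewing_perf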
