1. Introduction
Assessing the credibility of online sources (e.g., the creators of and contributors to content) is generally more complicated than in traditional media because “of the multiplicity of sources [such as contributors] embedded in the numerous layers of online dissemination” (Sundar, 2008, p. 74). Social media platforms such as Facebook and Twitter enable information to flow faster than ever before, consequently increasing the speed at which false information can spread online (Allcott & Gentzkow, 2017). False information, often referred to as fake news, may relate to a vast array of phenomena, ranging from hoaxes to sensationalism (see Tandoc Jr, Wei Lim, & Ling, 2018, for a review). In this article, we cautiously adopt the term ‘fake news’ to mean “the online publication of intentionally or knowingly false statements of fact” (Klein & Wueller, 2017, p. 6).
The World Economic Forum (WEF) regards fake news as one of the biggest challenges facing contemporary societies (Vicario et al., 2016). With its increasing spread, it has become necessary to identify ways to validate the information that users find online. Research has shown that social media shape our memory in such a way that people tend to conform to a majority recollection, even when it proves to be wrong (Spinney, 2017). The challenge is that people may not always be able to judge which information should be regarded as credible or non-credible, especially with regard to deepfake video clips, i.e., videos manipulated to make a person appear to say or do something they did not (Maras & Alexandrou, 2019). Non-credible information may lead to inaccurate beliefs and misperceptions, which could undermine democratic decision-making processes (Allcott & Gentzkow, 2017; Hameleers & van der Meer, 2019).
Because it has become difficult for individuals to assess and evaluate the information they encounter online (Allcott & Gentzkow, 2017; Metzger, Flanagin, Eyal, Lemus, & McCann, 2003; Robins & Holmes, 2008), a number of online services (e.g., Snopes, Hoax-Slayer, PolitiFact, FactCheck) have emerged to evaluate the credibility of online information through fact-checking by an editorial team. However, research has shown that these services have limited reach among consumers of non-credible information (Guess, Nyhan, & Reifler, 2018). There is therefore an increasing need to examine credibility on a broader scale by combining computer-assisted processing with human evaluation (Vosoughi, Roy, & Aral, 2018). The web serves as a prolific platform for collaboration and co-creation (Estellés-Arolas & González-Ladrón-de-Guevara, 2012) and thus has enormous potential to address the issue of online credibility assessment. Because a single task can be assigned to a large heterogeneous group, community-based crowdsourcing offers a broad way to evaluate the quality of content published online by exploiting collective competence and judgment (Hammon & Hippner, 2012).
While community-based approaches seem promising (Ishida & Kuraya, 2018), they have primarily focused on assessing a particular type of content, such as images or online news articles. In contrast, the purpose of this article is to conceptualize a crowdsourcing medium, that is, a participatory and co-creative means of evaluating the credibility of any source online. More specifically, the intended medium would evaluate any source origin, whether primary (e.g., original materials), secondary (e.g., reports of findings contained in primary sources), or tertiary (e.g., syntheses of primary and secondary sources), that could be uploaded, found, or shared online (e.g., an artifact, document, photo, video, or audio recording).