Dark Web: A Boon or a Bane


Punam Bedi, Neha Gupta, Vinita Jindal
Copyright: © 2020 | Pages: 13
DOI: 10.4018/978-1-5225-9715-5.ch010

Abstract

The World Wide Web is a part of the Internet that provides data dissemination facility to people. The contents of the Web are crawled and indexed by search engines so that they can be retrieved, ranked, and displayed as a result of users' search queries. These contents that can be easily retrieved using Web browsers and search engines comprise the Surface Web. All information that cannot be crawled by search engines' crawlers falls under the Deep Web. Deep Web content never appears in the results displayed by search engines. Though this part of the Web remains hidden, it can be reached using targeted search over normal Web browsers. Unlike the Deep Web, there exists a portion of the World Wide Web that cannot be accessed without special software. This is known as the Dark Web. This chapter describes how the Dark Web differs from the Deep Web and elaborates on the commonly used software to enter the Dark Web. It highlights the illegitimate and legitimate sides of the Dark Web and specifies the role played by cryptocurrencies in the expansion of the Dark Web's user base.
Chapter Preview

Surface Web

The Web provides its users with billions of web pages that can be easily accessed via standard web browsers and search engines. This part of the Web, which is openly available to everyone, is known as the Surface Web (Santos, 2015). Users access online resources present on the Surface Web either by typing a Uniform Resource Locator (URL) into the web browser or by submitting a string of keyword(s) as a query to the search engine. The latter option provides them with an ordered list of search results that are most relevant to their search query. Both these methods allow quick and easy access to information on the Surface Web, also known as the Visible Web or Clearnet. With huge amounts of information lying just a few clicks away, the general audience tends to think that all online content hosted on the World Wide Web is part of the Surface Web and can be accessed through conventional search engines (like Google and Bing) and web browsers. But this is far from the truth. The Surface Web contains less than 20 percent of the total information present on the Web (Santos, 2015; Sui, Caverlee, & Rudesill, 2015). This is because the Visible Web is formed solely by the contents that search engines are able to reach on the Web. Any web resource that is beyond the reach of search engines is not a part of the Visible Web. Thus, the volume of content present on the Surface Web is limited by the techniques that search engines follow to extract information from the World Wide Web.

Search engines make use of application programs that scan the World Wide Web to create an index of all “reachable” web resources. These programs are known as crawlers, spiders, or harvesters. Crawlers navigate the Web to gather documents and files present online in the form of web pages. Usually, web pages are linked to each other via incoming and outgoing hyperlinks. These hyperlinks enable search engines' spiders to reach different web pages and extract information from them. This process of gathering information while moving from one web page to another through hyperlinks is known as crawling. Beginning with an initial set of seed URLs, a crawler scans every web page linked to these URLs via outgoing hyperlinks (Santos, 2015). All online content that is reachable via incoming and/or outgoing hyperlinks gets crawled. The crawled pages are then indexed to allow easy retrieval later. Finally, when a user submits a search query, the search engine displays the matching indexed web pages in ranked order.
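As a concrete illustration of crawling and indexing, the sketch below implements a tiny breadth-first crawler with a simple inverted index using only the Python standard library. The seed URL, the page limit, and the naive word-splitting rule are illustrative assumptions for this sketch, not details of any particular search engine.

# Minimal sketch: breadth-first crawling plus a simple inverted index.
# Standard library only; seed URL and limits are placeholders.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkAndTextParser(HTMLParser):
    """Collects outgoing hyperlinks and visible text from one web page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        if data.strip():
            self.text_parts.append(data.strip())


def crawl(seed_urls, max_pages=20):
    """Breadth-first crawl from the seed URLs; returns an inverted index
    mapping each word to the set of URLs on which it appears."""
    frontier = deque(seed_urls)   # URLs waiting to be visited
    visited = set()               # URLs already crawled
    index = {}                    # word -> set of URLs

    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except Exception:
            continue              # unreachable pages are simply skipped
        visited.add(url)

        parser = LinkAndTextParser()
        parser.feed(html)

        # Index the words found on this page.
        for chunk in parser.text_parts:
            for word in chunk.lower().split():
                index.setdefault(word, set()).add(url)

        # Follow outgoing hyperlinks, resolved against the current page.
        for link in parser.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).scheme in ("http", "https"):
                frontier.append(absolute)

    return index


if __name__ == "__main__":
    # example.com is used purely as a placeholder seed URL.
    idx = crawl(["https://example.com/"], max_pages=5)
    print(f"Indexed {len(idx)} distinct words")

A production crawler would additionally respect robots.txt, deduplicate and prioritise URLs in the frontier, and distribute the work across many machines; ranking the indexed pages for a query is a separate stage built on top of an index like this one.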

Key Terms in this Chapter

Internet: It consists of networks across the world, interconnected to form one massive global network.

Indexing: The process followed by search engines to create an index of crawled web contents so that they can be retrieved easily when displaying results for user queries.

Surface Web: It is the part of the World Wide Web that can be easily accessed by everyone through the use of standard web browsers and search engines. It is also known as the Clearnet or Visible Web.

Bitcoin: It is a kind of digital cryptocurrency that is used for transactions in many black markets of the dark web.

The Onion Router (TOR): It is free and open-source software that enables anonymous communication and is commonly used to access the dark web.

Dark Web: The part of the World Wide Web that allows its users to remain anonymous through the use of specialised software that masks their online presence.

Deep Web: The part of the World Wide Web that cannot be crawled by web crawlers and does not appear in the results displayed by search engines.

Crawling: It is the process by which search engines' crawling robots gather online information, moving from one web page to another through hyperlinks.

World Wide Web (WWW): It provides an information-sharing capability on top of the Internet through web pages that are hosted online and served using the Hypertext Transfer Protocol (HTTP).
