The Anatomy Of A Large-Scale Hypertextual Web Search Engine



In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype, with a full text and hyperlink database of at least 24 million pages, is available at

To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms, and they answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advances in technology and web proliferation, creating a web search engine today is very different from doing so three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date.

Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved in using the additional information present in hypertext to produce better search results. This paper addresses the question of how to build a practical large-scale system which can exploit that additional information. We also look at the problem of how to deal effectively with uncontrolled hypertext collections, where anyone can publish anything they want.
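The keywords below name PageRank, the link-analysis measure at the heart of how Google exploits hypertext structure. The paper itself gives the full definition; as a rough illustration only, here is a minimal power-iteration sketch of the idea (the graph, function name, and damping value are hypothetical examples, not taken from this page):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Illustrative PageRank by power iteration (not the paper's code).

    links: dict mapping each page to the list of pages it links to.
    Each page's rank is split among its outlinks; dangling pages
    (no outlinks) spread their rank uniformly over all pages.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling node: redistribute its rank to every page.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

# Hypothetical three-page web: A and C both link to B; B links back to A.
scores = pagerank({"A": ["B"], "B": ["A"], "C": ["B"]})
```

In this toy graph, B ends up with the highest score because two pages point to it, while C, with no inlinks, keeps only the baseline rank -- a small-scale version of the intuition that links act as endorsements.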


Sergey Brin and Lawrence Page


World Wide Web, Search Engines, Information Retrieval, PageRank, Google





Date Submitted: Jul 14, 2014





Discussion and Questions

litzer 2 years, 10 months ago
Points: 0

In the introduction, there is mention of "high-quality human maintained indices." What is that referring to? Like people manually hooking up which hyperlink goes where to construct a graph that way?