The PageRank Problem, Multi-Agent Consensus and Web Aggregation -- A Systems and Control Viewpoint (1312.1904v1)

Published 6 Dec 2013 in cs.SY

Abstract: PageRank is an algorithm introduced in 1998 and used by the Google Internet search engine. It assigns a numerical value to each element of a set of hyperlinked documents (that is, web pages) within the World Wide Web with the purpose of measuring the relative importance of the page. The key idea in the algorithm is to give a higher PageRank value to web pages which are visited often by web surfers. On its website, Google describes PageRank as follows: "PageRank reflects our view of the importance of web pages by considering more than 500 million variables and 2 billion terms. Pages that are considered important receive a higher PageRank and are more likely to appear at the top of the search results." Today PageRank is a paradigmatic problem of great interest in various areas, such as information technology, bibliometrics, biology, and e-commerce, where objects are often ranked in order of importance. This article considers a distributed randomized approach based on techniques from the area of Markov chains using a graph representation consisting of nodes and links. We also outline connections with other problems of current interest to the systems and control community, which include ranking of control journals, consensus of multi-agent systems, and aggregation-based techniques.
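
The abstract describes PageRank as a ranking computed over a graph of nodes and links via Markov-chain techniques. For orientation, below is a minimal power-iteration sketch of the standard centralized PageRank computation; it does not reproduce the distributed randomized scheme the paper develops, and the damping factor 0.85 and the toy link graph are illustrative assumptions rather than values from the paper.

```python
# Minimal power-iteration sketch of the standard (centralized) PageRank
# computation on a toy hyperlink graph. The damping factor 0.85 and the
# example graph are illustrative assumptions, not values from the paper.

def pagerank(links, damping=0.85, tol=1e-10, max_iter=1000):
    """links: dict mapping each node to the list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}  # uniform initial distribution

    for _ in range(max_iter):
        new_rank = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in links.items():
            if outs:  # distribute this node's rank over its out-links
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new_rank[w] += share
            else:  # dangling node: spread its rank uniformly
                for w in nodes:
                    new_rank[w] += damping * rank[v] / n
        if sum(abs(new_rank[v] - rank[v]) for v in nodes) < tol:
            return new_rank
        rank = new_rank
    return rank


if __name__ == "__main__":
    toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.4f}")
```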

Authors (2)
  1. Hideaki Ishii (60 papers)
  2. Roberto Tempo (25 papers)
Citations (84)