HITS algorithm

Hyperlink-Induced Topic Search (HITS; also known as hubs and authorities) is a link analysis algorithm that rates Web pages, developed by Jon Kleinberg. The idea behind hubs and authorities stemmed from a particular insight into the creation of web pages when the Internet was originally forming; that is, certain web pages, known as hubs, served as large directories that were not actually authoritative in the information they held, but were used as compilations of a broad catalog of information that led users directly to other, authoritative pages. In other words, a good hub is a page that points to many other pages, and a good authority is a page that is linked to by many different hubs.[1]

The scheme therefore assigns two scores for each page: its authority, which estimates the value of the content of the page, and its hub value, which estimates the value of its links to other pages.

History

In journals

Many methods have been used to rank the importance of scientific journals. One such method is Garfield's impact factor. However, journals such as Science and Nature accumulate a very large number of citations, which gives them very high impact factors. Thus, when comparing two more obscure journals that have received roughly the same number of citations, if one of them has received many citations from Science and Nature, that journal should be ranked higher. In other words, it is better to receive citations from an important journal than from an unimportant one.[2]

On the Web

This phenomenon also occurs on the Internet. Counting the number of links to a page can give a general estimate of its prominence on the Web, but a page with very few incoming links may also be prominent if two of those links come from the home pages of sites like Yahoo!, Google, or MSN. Because these sites are of very high importance but are also search engines, a page can be ranked much higher than its actual relevance.

Algorithm

Expanding the root set into a base set

In the HITS algorithm, the first step is to retrieve the pages most relevant to the search query. This set is called the root set and can be obtained by taking the top pages returned by a text-based search algorithm. A base set is generated by augmenting the root set with all the web pages that are linked from it and some of the pages that link to it. The web pages in the base set and all hyperlinks among those pages form a focused subgraph. The HITS computation is performed only on this focused subgraph. According to Kleinberg, the reason for constructing a base set is to ensure that most (or many) of the strongest authorities are included.
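As a rough illustration, the following Python sketch shows one way such a base set could be assembled; the helper functions text_search, out_links, and in_links are hypothetical stand-ins for a text-based search engine and a link index, and the limits k and d are arbitrary choices, none of which are specified by the algorithm itself.

def build_base_set(query, k=200, d=50):
    # Root set: top-k results returned by a text-based search (hypothetical helper).
    root = set(text_search(query, limit=k))
    base = set(root)
    for page in root:
        # Add every page that a root page links to ...
        base.update(out_links(page))
        # ... and up to d of the pages that link to it, so that a very
        # popular root page does not blow up the size of the base set.
        base.update(list(in_links(page))[:d])
    return base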

Authority and hub values are defined in terms of one another in a mutual recursion. An authority value is computed as the sum of the scaled hub values of the pages that point to that page. A hub value is the sum of the scaled authority values of the pages it points to. Some implementations also consider the relevance of the linked pages.

The algorithm performs a series of iterations, each consisting of two basic steps:

  1. Authority update: update each node's authority score to be equal to the sum of the hub scores of each node that points to it. That is, a node is given a high authority score by being linked to from pages that are recognized as hubs for information.
  2. Hub update: update each node's hub score to be equal to the sum of the authority scores of each node that it points to. That is, a node is given a high hub score by linking to nodes that are considered to be authorities on the subject.

The hub score and authority score for a node are calculated with the following algorithm:

  1. Start with each node having a hub score and authority score of 1.
  2. Run the authority update rule.
  3. Run the hub update rule.
  4. Normalize the values by dividing each hub score by the square root of the sum of the squares of all hub scores, and dividing each authority score by the square root of the sum of the squares of all authority scores.
  5. Repeat from the second step as necessary.

HITS, like Page and Brin's PageRank, is an iterative algorithm based on the linkage of the documents on the web. However, it has some major differences:

  1. It is query dependent: the hub and authority scores resulting from the link analysis are influenced by the search terms.
  2. As a corollary, it is executed at query time rather than at indexing time, with the associated drop in performance that accompanies query-time processing.
  3. It computes two scores per document (hub and authority) as opposed to a single score.
  4. It is processed only on a small subset of 'relevant' documents (the focused subgraph, or base set), not on all documents as is the case with PageRank.

In detail

To begin the ranking, we set auth(p) = 1 and hub(p) = 1 for each page p. We consider two types of updates: the Authority Update Rule and the Hub Update Rule. In order to calculate the hub/authority scores of each node, repeated iterations of the Authority Update Rule and the Hub Update Rule are applied. A k-step application of the Hub-Authority algorithm entails applying the Authority Update Rule followed by the Hub Update Rule k times.

Authority Update Rule

For each page p, we update auth(p) to be the summation:

    auth(p) = Σ_{i=1}^{n} hub(i)

where n is the total number of pages that link to p and i is a page that links to p. That is, the authority score of a page is the sum of the hub scores of all the pages that point to it.
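For example (with purely illustrative numbers), if p is pointed to by three pages whose hub scores are 0.2, 0.5, and 0.3, then auth(p) = 0.2 + 0.5 + 0.3 = 1.0 before normalization.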

Hub Update Rule

For each page p, we update hub(p) to be the summation:

    hub(p) = Σ_{i=1}^{n} auth(i)

where n is the total number of pages that p links to and i is a page that p links to. Thus a page's hub score is the sum of the authority scores of all the pages it links to.
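Similarly (again with illustrative numbers), if p links to two pages whose authority scores are 0.4 and 0.1, then hub(p) = 0.4 + 0.1 = 0.5 before normalization.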

Normalization

The final hub-authority scores of nodes are determined only in the limit of infinitely many repetitions of the algorithm. However, directly and iteratively applying the Hub Update Rule and the Authority Update Rule leads to diverging values, so it is necessary to normalize the scores after every iteration. The values obtained from this process then eventually converge.[3]
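As a small worked example with made-up numbers: if the unnormalized authority scores of three pages are 3, 0, and 4, the normalizing factor is sqrt(3² + 0² + 4²) = 5, so the normalized scores become 0.6, 0, and 0.8.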

Pseudocode

G := set of pages
function HubsAndAuthorities(G)
  for each page p in G do  // initialise all scores
    p.auth = 1  // p.auth is the authority score of the page p
    p.hub = 1   // p.hub is the hub score of the page p
  for step from 1 to k do  // run the algorithm for k steps
    norm = 0
    for each page p in G do  // update all authority values first
      p.auth = 0
      for each page q in p.incomingNeighbors do  // p.incomingNeighbors is the set of pages that link to p
        p.auth += q.hub
      norm += square(p.auth)  // calculate the sum of the squared auth values to normalise
    norm = sqrt(norm)
    for each page p in G do  // normalise the auth values
      p.auth = p.auth / norm
    norm = 0
    for each page p in G do  // then update all hub values
      p.hub = 0
      for each page r in p.outgoingNeighbors do  // p.outgoingNeighbors is the set of pages that p links to
        p.hub += r.auth
      norm += square(p.hub)  // calculate the sum of the squared hub values to normalise
    norm = sqrt(norm)
    for each page p in G do  // normalise the hub values
      p.hub = p.hub / norm

The hub and authority values converge in the pseudocode above.

The pseudocode below does not converge: because the hub and authority values are never normalized, they grow without bound, and the computation only terminates because the number of steps k is fixed in advance. One way to obtain convergence is to normalize the hub and authority values after each "step" by dividing each authority value by the square root of the sum of the squares of all authority values, and dividing each hub value by the square root of the sum of the squares of all hub values. This is what the pseudocode above does.

Non-converging pseudocode

G := set of pages
function HubsAndAuthorities(G)
  for each page p in G do  // initialise all scores
    p.auth = 1  // p.auth is the authority score of the page p
    p.hub = 1   // p.hub is the hub score of the page p
  for step from 1 to k do  // run the algorithm for k steps
    for each page p in G do  // update all authority values first
      p.auth = 0
      for each page q in p.incomingNeighbors do  // p.incomingNeighbors is the set of pages that link to p
        p.auth += q.hub
    for each page p in G do  // then update all hub values
      p.hub = 0
      for each page r in p.outgoingNeighbors do  // p.outgoingNeighbors is the set of pages that p links to
        p.hub += r.auth
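For readers who want something directly executable, the following is a minimal Python sketch of the normalized (converging) variant above. The link graph is assumed to be a dictionary mapping each page to the set of pages it links to; the function name, the graph representation, and the default number of steps are illustrative choices, not part of the original algorithm description.

import math

def hits(outgoing, k=20):
    # outgoing maps each page to the set of pages it links to.
    # Returns two dicts (hub, auth), each normalized to unit length.
    pages = set(outgoing) | {q for targets in outgoing.values() for q in targets}

    # Build the reverse adjacency: which pages link to p.
    incoming = {p: set() for p in pages}
    for p, targets in outgoing.items():
        for q in targets:
            incoming[q].add(p)

    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}

    for _ in range(k):
        # Authority update: sum of hub scores of the pages linking to p, then normalize.
        auth = {p: sum(hub[q] for q in incoming[p]) for p in pages}
        norm = math.sqrt(sum(v * v for v in auth.values())) or 1.0
        auth = {p: v / norm for p, v in auth.items()}

        # Hub update: sum of authority scores of the pages p links to, then normalize.
        hub = {p: sum(auth[q] for q in outgoing.get(p, ())) for p in pages}
        norm = math.sqrt(sum(v * v for v in hub.values())) or 1.0
        hub = {p: v / norm for p, v in hub.items()}

    return hub, auth

# Example usage on a tiny made-up graph:
# hub, auth = hits({"a": {"b", "c"}, "b": {"c"}, "c": {"a"}})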

References

  1. Christopher D. Manning, Prabhakar Raghavan & Hinrich Schütze (2008). "Introduction to Information Retrieval". Cambridge University Press. Retrieved 2008-11-09.
  2. Kleinberg, Jon (December 1999). "Hubs, Authorities, and Communities". Cornell University. Retrieved 2008-11-09.
  3. von Ahn, Luis (2008-10-19). "Hubs and Authorities" (PDF). 15-396: Science of the Web Course Notes. Carnegie Mellon University. Retrieved 2015-01-19.
