Website correlation

Website correlation, or website matching, is the process of identifying websites that are similar or related. Websites are inherently easy to duplicate,[1][2] which has led to a proliferation of identical[3] or very similar websites for purposes ranging from translation to Internet marketing (especially affiliate marketing)[4] to Internet crime.[5] Locating similar websites is inherently problematic because they may be in different languages, on different servers, and in different countries (under different top-level domains).

Uses

Website correlation is used in fields such as investigations involving the Internet and computer networks[6] and the detection of duplicate and near-duplicate documents.[7]

Correlation types

There are several known types of correlation, each demonstrating different strengths and weaknesses. A practical website correlation process may require combining two or more of these methods.

Similar structure

To save time and effort, website owners duplicate major portions of website code across many domains. Similarity of code structure can provide enough information for correlation. Organizations known to maintain publicly searchable databases for this kind of correlation include:

note: Websites can share the same structure yet have no relationship to each other, as when they coincidentally use the same content management system.
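One simple way to compare code structure, sketched below, is to reduce each page to the sequence of its HTML tags and measure how much the tag sequences overlap; the fingerprinting scheme (adjacent tag pairs with Jaccard similarity) is an illustrative assumption, not the method used by any particular database.

```python
from html.parser import HTMLParser


class TagCollector(HTMLParser):
    """Collects the sequence of opening tag names, ignoring text content."""

    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)


def structure_fingerprint(html):
    """Reduce a page to the set of adjacent tag pairs in its markup."""
    parser = TagCollector()
    parser.feed(html)
    return set(zip(parser.tags, parser.tags[1:]))


def structure_similarity(html_a, html_b):
    """Jaccard similarity of two pages' tag-pair sets (0.0 to 1.0)."""
    a, b = structure_fingerprint(html_a), structure_fingerprint(html_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)
```

Two pages built from the same template score 1.0 even when their visible text differs completely, which is exactly the property this correlation type relies on.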

Same server or subnet

Also known as correlated reverse DNS lookup. Websites may be served from the same server, on one or more IP addresses, or on one or more subnets. Several organizations retain archives of IP address data and correlate the data. Examples include:

note: Correlation via this method may be misleading because unrelated websites frequently exist on the same server (shared hosting).
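Once domain names have been resolved to addresses (for example with `socket.gethostbyname`), grouping by subnet is straightforward. The sketch below assumes the resolution step has already happened and uses made-up example addresses; a /24 prefix is an arbitrary but common grouping choice.

```python
from collections import defaultdict
from ipaddress import ip_network


def group_by_subnet(resolved, prefix=24):
    """Group domains whose IPv4 addresses fall in the same /prefix subnet.

    `resolved` maps domain -> IP string, e.g. as returned by
    socket.gethostbyname(); only groups with more than one domain
    are reported, since singletons carry no correlation signal.
    """
    groups = defaultdict(list)
    for domain, ip in resolved.items():
        # strict=False lets us pass a host address rather than a network address
        net = ip_network(f"{ip}/{prefix}", strict=False)
        groups[str(net)].append(domain)
    return {net: sorted(doms) for net, doms in groups.items() if len(doms) > 1}
```

As the note above warns, co-location alone is weak evidence: on shared hosting, thousands of unrelated sites can resolve into the same /24.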

Same owner

Websites may be authored by the same person or organization. Website owners are required to provide contact information to a registrar to obtain a domain name. Domain ownership can be determined via the WHOIS protocol, which itself provides no mechanism for searching or correlating ownership. Several organizations retain archives of WHOIS information and provide searching and correlation services. Examples include:

note: Website ownership information can be falsified, outdated, or hidden from public view. Website correlation via this method can be accurate, misleading, or impossible depending on the information contained in WHOIS records.
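A correlation service of this kind essentially inverts the WHOIS data: instead of looking up a domain's owner, it looks up all domains for one owner. The sketch below groups domains by registrant e-mail extracted from raw WHOIS text; WHOIS record layouts vary by registrar, so the `Registrant Email:` field name is an assumption matching one common format, and the sample records in the test are fabricated for illustration.

```python
import re
from collections import defaultdict

# One common WHOIS field layout; other registrars format this differently.
EMAIL_FIELD = re.compile(r"Registrant Email:\s*(\S+)", re.IGNORECASE)


def registrant_email(whois_text):
    """Pull the registrant e-mail out of a raw WHOIS response, if present."""
    match = EMAIL_FIELD.search(whois_text)
    return match.group(1).lower() if match else None


def group_by_registrant(records):
    """`records` maps domain -> raw WHOIS text; returns email -> [domains]."""
    groups = defaultdict(list)
    for domain, text in records.items():
        email = registrant_email(text)
        if email:  # records with hidden/absent ownership contribute nothing
            groups[email].append(domain)
    return dict(groups)
```

Records protected by WHOIS privacy services simply drop out of the grouping, which is the "impossible" case the note describes.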

Similar content

Search engines provide searchable databases of indexed website content; searching for a distinctive phrase effectively correlates websites by content similarity.

Google

note: This method of correlation is inherently slow because one must guess which phrases to search for. Also, related websites may not contain literally similar content (as when a site has been translated into another language).
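Content similarity between two pages can also be measured directly, without a search engine, using the shingling approach described in the near-duplicate detection literature cited above.[3][7] The minimal sketch below uses word-level w-shingles with Jaccard similarity; the window size of 4 is an illustrative choice.

```python
def shingles(text, w=4):
    """Set of w-word shingles (overlapping word windows) from a page's text."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}


def content_similarity(text_a, text_b, w=4):
    """Jaccard similarity between two pages' shingle sets (0.0 to 1.0)."""
    a, b = shingles(text_a, w), shingles(text_b, w)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)
```

Note that this measure inherits the weakness flagged above: a faithful translation of a page into another language shares essentially no shingles with the original and scores near 0.0.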

Same category

Websites are frequently categorized or tagged similarly via automated or manual means. Examples of publicly accessible website categorization databases include:

note: Manual categorization and tagging (metadata) methods are inherently subjective.[8] Automated categorization and tagging methods are subject to the varying strengths and weaknesses of the underlying categorization algorithms.[9]
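Given a categorization database, correlation reduces to finding sites whose tag sets overlap. The sketch below reports site pairs sharing a minimum number of tags; the threshold of 2 and the sample tag sets are illustrative assumptions, not drawn from any real directory.

```python
from itertools import combinations


def correlate_by_tags(tagged, min_shared=2):
    """Return (site, site, shared_tags) triples for pairs sharing tags.

    `tagged` maps domain -> set of category tags; pairs are reported only
    when at least `min_shared` tags coincide, since a single shared tag
    (e.g. a popular category) is weak evidence of a relationship.
    """
    pairs = []
    for (d1, t1), (d2, t2) in combinations(sorted(tagged.items()), 2):
        shared = t1 & t2
        if len(shared) >= min_shared:
            pairs.append((d1, d2, sorted(shared)))
    return pairs
```

Because the tags themselves are subjective or algorithm-dependent, as the note above says, the threshold mainly tunes how much of that noise leaks into the results.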

Same tracking ID

Tracking IDs, used for analytics or affiliate identification, are frequently embedded in website code. These IDs can be used for correlation because they imply common management of websites. Publicly available websites for correlating by tracking ID include:
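As a concrete case, classic Google Analytics property IDs have the form `UA-<account>-<property>`, where the middle number identifies the Analytics account; sites sharing it are plausibly under common management. The sketch below extracts that account number from raw HTML and groups domains by it; the sample pages in the test are fabricated for illustration.

```python
import re
from collections import defaultdict

# Classic Google Analytics IDs look like "UA-1234567-1";
# the captured middle number identifies the account.
GA_ACCOUNT = re.compile(r"UA-(\d{4,10})-\d+")


def ga_accounts(html):
    """Google Analytics account numbers found in a page's markup."""
    return {f"UA-{acct}" for acct in GA_ACCOUNT.findall(html)}


def group_by_tracking_id(pages):
    """`pages` maps domain -> raw HTML; returns account -> sorted domains.

    Only accounts seen on more than one domain are reported, since a
    unique ID correlates nothing.
    """
    groups = defaultdict(set)
    for domain, html in pages.items():
        for acct in ga_accounts(html):
            groups[acct].add(domain)
    return {acct: sorted(doms) for acct, doms in groups.items() if len(doms) > 1}
```

The same grouping pattern applies to other embedded identifiers, such as affiliate IDs or ad-network publisher codes, with only the regular expression changing.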

References

  1. Search: "website replication", Google.
  2. Search: "website clone script", Google.
  3. Fetterly, D., Manasse, M., Najork, M., "On the Evolution of Clusters of Near Duplicate Web Pages", Proceedings of the First Conference on Latin American Web Congress, pp. 37, 2003.
  4. I've Got a Domain Name--Now What???: A Practical Guide to Building a Website and Web Presence, ISBN 1-60005-109-X, 2008.
  5. Shane McGlaun, "Microsoft Granted Permanent Ownership of 276 Botnet Domains", DailyTech, September 9, 2010.
  6. Investigations Involving the Internet and Computer Networks, National Institute of Justice (U.S.), 2007.
  7. J. Prasanna Kumar, P. Govindarajulu, "Duplicate and Near Duplicate Documents Detection: A Review", European Journal of Scientific Research, ISSN 1450-216X, Vol. 32, No. 4 (2009), pp. 514-527.
  8. Bruce & Wiebe, "Recognizing subjectivity: a case study in manual tagging", Natural Language Engineering, 1999.
  9. Fabrizio Sebastiani, "Machine learning in automated text categorization", ACM Computing Surveys, 34(1):1-47, 2002.
This article is issued from Wikipedia (version of 8/27/2016). The text is available under the Creative Commons Attribution/Share-Alike license, but additional terms may apply for the media files.