Wikilinks: A Large-scale Cross-Document Coreference Corpus Labeled via Links to Wikipedia (Extended Dataset)
Sameer Singh and Amarnag Subramanya and Fernando Pereira and Andrew McCallum

folder wiki-link (109 files)
full-content/part2/part2/108.gz 1.50GB
full-content/part2/part2/109.gz 1.40GB
full-content/part2/part2/105.gz 1.96GB
full-content/part2/part2/106.gz 1.61GB
full-content/part2/part2/107.gz 1.30GB
full-content/part2/part2/103.gz 2.02GB
full-content/part2/part2/104.gz 1.89GB
full-content/part2/part2/100.gz 1.69GB
full-content/part2/part2/101.gz 2.01GB
full-content/part2/part2/102.gz 2.09GB
full-content/part2/part2/097.gz 1.30GB
full-content/part2/part2/098.gz 1.63GB
full-content/part2/part2/099.gz 1.96GB
full-content/part2/part2/094.gz 2.01GB
full-content/part2/part2/095.gz 1.62GB
full-content/part2/part2/096.gz 1.45GB
full-content/part2/part2/092.gz 2.03GB
full-content/part2/part2/093.gz 1.89GB
full-content/part2/part2/089.gz 1.72GB
full-content/part2/part2/090.gz 2.03GB
full-content/part2/part2/091.gz 2.04GB
full-content/part2/part2/086.gz 1.06GB
full-content/part2/part2/087.gz 1.68GB
full-content/part2/part2/088.gz 1.97GB
full-content/part2/part2/083.gz 2.09GB
full-content/part2/part2/084.gz 1.60GB
full-content/part2/part2/085.gz 1.67GB
full-content/part2/part2/081.gz 2.02GB
full-content/part2/part2/082.gz 1.87GB
full-content/part2/part2/078.gz 1.72GB
full-content/part2/part2/079.gz 2.00GB
full-content/part2/part2/080.gz 2.05GB
full-content/part2/part2/076.gz 1.79GB
full-content/part2/part2/077.gz 1.93GB
full-content/part2/part2/073.gz 1.60GB
full-content/part2/part2/074.gz 1.69GB
full-content/part2/part2/075.gz 923.11MB
full-content/part2/part2/070.gz 2.04GB
full-content/part2/part2/071.gz 1.87GB
full-content/part2/part2/072.gz 2.11GB
full-content/part2/part2/067.gz 1.69GB
full-content/part2/part2/068.gz 1.97GB
full-content/part2/part2/069.gz 2.07GB
full-content/part2/part2/065.gz 1.79GB
full-content/part2/part2/066.gz 1.93GB
full-content/part2/part2/062.gz 1.57GB
full-content/part2/part2/063.gz 1.73GB
full-content/part2/part2/064.gz 909.80MB
full-content/part2/part2/059.gz 2.05GB
(listing truncated; 109 files in total)
Type: Dataset

title= {Wikilinks: A Large-scale Cross-Document Coreference Corpus Labeled via Links to Wikipedia (Extended Dataset)},
author= {Sameer Singh and Amarnag Subramanya and Fernando Pereira and Andrew McCallum},
abstract= {Cross-document coreference resolution is the task of grouping the entity mentions in a collection of documents into sets that each represent a distinct entity. It is central to knowledge base construction and also useful for joint inference with other NLP components. Obtaining large, organic labeled datasets for training and testing cross-document coreference has previously been difficult. We use a method for automatically gathering massive amounts of naturally-occurring cross-document reference data to create the Wikilinks dataset, comprising 40 million mentions over 3 million entities. Our method is based on finding hyperlinks to Wikipedia from a web crawl and using anchor text as mentions. In addition to providing large-scale labeled data without human effort, we are able to include many styles of text beyond newswire and many entity types beyond people.},

### Introduction

 The Wikipedia links (WikiLinks) data consists of web pages that
 satisfy the following two constraints:

a. contain at least one hyperlink that points to Wikipedia, and
b. the anchor text of that hyperlink closely matches the title of the target Wikipedia page.
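The page itself does not spell out what "closely matches" in constraint (b) means. As a rough illustration only (our own heuristic, not the authors' exact criterion), a normalization-based check might look like this:

```python
import re
from urllib.parse import unquote

def title_from_wiki_url(url):
    """Extract a normalized page title from a Wikipedia URL, e.g.
    http://en.wikipedia.org/wiki/Vacuum_tube -> 'vacuum tube'."""
    title = unquote(url.rsplit("/", 1)[-1])
    return title.replace("_", " ").lower()

def anchor_matches_title(anchor, wiki_url):
    """Illustrative stand-in for the 'closely matches' condition:
    accept the anchor when its whitespace-collapsed, lowercased form
    equals the normalized title of the target page."""
    norm = re.sub(r"\s+", " ", anchor).strip().lower()
    return norm == title_from_wiki_url(wiki_url)
```

A fuzzier comparison (e.g. tolerating plurals such as "vacuum tubes") would admit more of the mentions that actually appear in the data; exact equality is shown here only to keep the sketch simple.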

  We treat each page on Wikipedia as representing an entity
  (or concept or idea), and the anchor text as a mention of the
  entity. The WikiLinks data set was obtained by iterating 
  over Google's web index. 

####  Content

  This dataset is accompanied by the tech report cited above.

  Please cite that report if you use this data.

  The dataset is divided over 10 gzipped text files,
  data-0000[0-9]-of-00010.gz. Each file can be viewed without
  uncompressing it by using zcat. For example:

  zcat data-00001-of-00010.gz | head 

  MENTION	vacuum tube	421
  MENTION	vacuum tubes	10838
  MENTION	electron gun	598
  MENTION	fluorescent	790
  MENTION	oscilloscope	1307
  MENTION	computer monitor	1503
  MENTION	computer monitors	3066
  MENTION	radar	1657
  MENTION	plasma screens	2162

  Each file is in the following format (placeholders reconstructed
  from the description below):

  URL	<page url>
  MENTION	<mention string>	<byte offset>	<target url>
  MENTION	<mention string>	<byte offset>	<target url>
  ...
  TOKEN	<token string>	<byte offset>
  ...
  where each web page is identified by its URL (on the line annotated
  with "URL"). For every mention (denoted by "MENTION"), we provide the
  actual mention string, the byte offset of the mention from the start
  of the page, and the target URL, all separated by tabs. It is
  possible (and in many cases very likely) that the contents of a
  web page change over time. The dataset therefore also contains
  information about the 10 least frequent tokens on each page at the
  time it was crawled. These lines start with "TOKEN" and contain
  the token string and its byte offset from the start of the page.
  The token strings can be used as fingerprints to verify whether the
  page used to generate the data has changed. Finally, pages are
  separated from each other by two blank lines.

####  Basic Statistics

  Number of documents: 11 million
  Number of entities:   3 million
  Number of mentions:  40 million

  Finally, please note that this dataset was created automatically
  from the web and therefore contains some amount of noise.


####  Contact

  Amar Subramanya
  Sameer Singh
  Fernando Pereira
  Andrew McCallum
keywords= {},
terms= {},
license= {Attribution 3.0 Unported (CC BY 3.0)}
