<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:academictorrents="https://academictorrents.com" version="2.0">
<channel>
<title>Academic Torrents</title>
<description>All Torrents</description>
<link>https://academictorrents.com/</link>
<item>
<title>Wikipedia European languages 2026-03-01</title>
<category>Dataset</category>
<infohash>357aed6775e72b4bac4688497590a262d87d2e2a</infohash>
<guid>https://academictorrents.com/details/357aed6775e72b4bac4688497590a262d87d2e2a</guid>
<link>https://academictorrents.com/details/357aed6775e72b4bac4688497590a262d87d2e2a</link>
<description>Wikipedia database dumps of European language wikis with 10k articles or more. enwiki excluded. Wikipedia Multistream 2026-03-01. These 68 languages are included: Albanian, Alemannic, Aragonese, Asturian, Basque, Bavarian, Belarusian, Bosnian, Breton, Bulgarian, Catalan, Croatian, Czech, Danish, Dutch, Emilian-Romagnol, Esperanto, Estonian, Faroese, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Ladin, Latin, Latvian, Ligurian, Limburgish, Lithuanian, Lombard, Low German, Luxembourgish, Macedonian, Maltese, Neapolitan, North Frisian, Norwegian, Nynorsk, Occitan, Piedmontese, Polish, Portuguese, Romanian, Romansh, Rusyn, Samogitian, Scots, Scottish Gaelic, Serbian, Serbo-Croatian, Sicilian, Silesian, Slovak, Slovenian, Spanish, Swedish, Ukrainian, Upper Sorbian, Venetian, Walloon, Welsh, West Frisian, Yiddish.</description>
<size>52792595523</size>
</item><item>
<title>Public MediaWiki Collection</title>
<category>Dataset</category>
<infohash>3ec30d9d8817f62d338ae76783d24ba207b6e9de</infohash>
<guid>https://academictorrents.com/details/3ec30d9d8817f62d338ae76783d24ba207b6e9de</guid>
<link>https://academictorrents.com/details/3ec30d9d8817f62d338ae76783d24ba207b6e9de</link>
<description># Dataset Card for Public MediaWiki Collection ### Dataset Summary This dataset contains 1,662,448 articles harvested from 930 random public MediaWiki instances found across the Internet. The collection was created by extracting current page content from these wikis, preserving article text, metadata, and structural information. The dataset represents a diverse cross-section of public wiki content spanning multiple domains, topics, and languages. ### Languages The dataset is multilingual, covering 35+ languages found across the collected wiki instances. ## Dataset Structure ### Data Fields This dataset includes the following fields: -  id : Unique identifier for the article (string) -  title : Title of the article (string) -  text : Main content of the article (string) -  metadata : Dictionary containing: -  templates : List of templates used in the article -  categories : List of categories the article belongs to -  wikilinks : List of internal wiki links and their text -  external_links : List of external links -  sections : List of section titles and their levels ### Data Splits All examples are in a single split. ## Additional Information ### License This dataset is released under the Creative Commons Attribution-ShareAlike 4.0 International (CC-BY-SA 4.0) license, consistent with the licensing of the source MediaWiki instances. To learn more about CC-BY-SA 4.0, visit: https://creativecommons.org/licenses/by-sa/4.0/</description>
<size>1167900670</size>
</item><item>
<title>9111.ru Questions Dataset</title>
<category>Dataset</category>
<infohash>3fa77d9c4028fd6aa8a6dbdad67a218fc1ad7a5d</infohash>
<guid>https://academictorrents.com/details/3fa77d9c4028fd6aa8a6dbdad67a218fc1ad7a5d</guid>
<link>https://academictorrents.com/details/3fa77d9c4028fd6aa8a6dbdad67a218fc1ad7a5d</link>
<description># Dataset Card for 9111.ru Questions ### Dataset Summary This dataset includes legal questions and answers from the Russian law forum [9111.ru](https://9111.ru). It contains inquiries from users and corresponding responses from lawyers. The dataset was created by processing around 21 million questions, providing a significant corpus of legal discussions. ### Languages The dataset is mostly in Russian, but there may be other languages present. ## Dataset Structure ### Data Fields This dataset includes the following fields: -  id : Identifier for the item (integer) -  title : Title of the question (string) -  description : Description of the question (string) -  answers : An array of answer objects (array); each answer object contains: -  user_name : Name of the user who answered (string) -  status : Status of the user (string) -  rating : Rating of the user (integer) -  text : Text of the answer (string) ### Data Format The dataset is stored as Apache Parquet files with zstd compression (level 19), split into 3 shards: -  questions-00000-of-00003.parquet  -  questions-00001-of-00003.parquet  -  questions-00002-of-00003.parquet  ### Data Splits All examples are in the train split; there is no validation split.</description>
<size>2938274461</size>
</item><item>
<title>Fandom.com Community Database Dumps Dataset</title>
<category>Dataset</category>
<infohash>0a0ad3dd44e05af1725fd8d17f5aeba856078d5f</infohash>
<guid>https://academictorrents.com/details/0a0ad3dd44e05af1725fd8d17f5aeba856078d5f</guid>
<link>https://academictorrents.com/details/0a0ad3dd44e05af1725fd8d17f5aeba856078d5f</link>
<description># Dataset Card for Fandom.com Community Database Dumps ### Dataset Summary This dataset contains 7,040,984 current pages from all available [Fandom.com community wiki dumps](https://community.fandom.com/wiki/Help:Database_download) as of February 18, 2025. The dataset was created by processing the "Current pages" database dumps from all available Fandom.com wikis. These dumps contain only the current versions of pages without edit history and include article text, metadata, and structural information across multiple languages. ### Languages The dataset is multilingual, covering [40+ languages](https://community.fandom.com/wiki/Help:Language_codes). ## Dataset Structure ### Data Fields This dataset includes the following fields: -  id : Unique identifier for the article (string) -  title : Title of the article (string) -  text : Main content of the article (string) -  metadata : Dictionary containing: -  templates : List of templates used in the article -  categories : List of categories the article belongs to -  wikilinks : List of internal wiki links and their text -  external_links : List of external links -  sections : List of section titles and their levels ### Data Splits All examples are in a single split. ## Additional Information ### License This dataset inherits the licenses from the source Fandom communities, which use Creative Commons Attribution-ShareAlike 3.0 (CC-BY-SA 3.0). To learn more about CC-BY-SA 3.0, visit: https://creativecommons.org/licenses/by-sa/3.0/</description>
<size>6224651822</size>
</item><item>
<title>ca-on_province_of_ontario-2024A000235_drape_eastern_ontario_orthoimagery_2024_16cm_v0.1.0-beta.pmtiles</title>
<category>Dataset</category>
<infohash>adb5741cdcb9352848cc80c976629b44720a04c2</infohash>
<guid>https://academictorrents.com/details/adb5741cdcb9352848cc80c976629b44720a04c2</guid>
<link>https://academictorrents.com/details/adb5741cdcb9352848cc80c976629b44720a04c2</link>
<description>High‑resolution aerial imagery from Ontario’s DRAPE 2024, packaged for fast web maps and offline use. Smooth panning, crisp detail, open data. Want to preview the file? Go to https://source.coop/dataforcanada/d4c-datapkg-orthoimagery/processed/ca-on_province_of_ontario-2024A000235_drape_eastern_ontario_orthoimagery_2024_16cm_v0.1.0-beta.pmtiles We are aware that there is a nodata issue with the product and will fix it in the next release.</description>
<size>215750229448</size>
</item><item>
<title>Street-Level Imagery Dataset</title>
<category>Dataset</category>
<infohash>207ba45161f6ba12114cb6d97ad25d222d5125c9</infohash>
<guid>https://academictorrents.com/details/207ba45161f6ba12114cb6d97ad25d222d5125c9</guid>
<link>https://academictorrents.com/details/207ba45161f6ba12114cb6d97ad25d222d5125c9</link>
<description># Street-Level Imagery Dataset Metadata for street-level imagery across Eastern Europe and Northern Asia. Each record includes image URLs, coordinates, camera orientation, timestamps, and links to similar images captured at the same location over time.

## Summary

| Statistic | Value |
|---|---|
| Total Records | 934,191 |
| Unique Images | 905,940 |
| Time Span | 2016–2025 |
| File Format | Parquet |

## Geographic Coverage

| Boundary | Value |
|---|---|
| Minimum Longitude | 20.49° E |
| Maximum Longitude | 152.32° E |
| Minimum Latitude | 38.55° N |
| Maximum Latitude | 69.05° N |

Coverage spans urban centers and rural routes. Density is higher in populated areas.

## Camera Specifications

**Directions**

| Direction | Count |
|---|---|
| Front | 740,079 |
| Right | 194,112 |

**Resolutions**

| Preview Size | Full Size | Count |
|---|---|---|
| 284×160 | 1920×1080 | 932,171 |
| 90×160 | 1080×1920 | 1,886 |
| 284×160 | 1536×864 | 77 |
| 213×160 | 2016×1512 | 41 |

## Data Structure

### Fields

| Field | Type | Description |
|---|---|---|
| id | string | Unique image identifier |
| sourceId | string | Source device or session identifier |
| heading | float64 | Camera heading (0–360°) |
| cameraDirection | string | Mount position (front or right) |
| timestamp | string | ISO 8601 capture time |
| imagePreview | struct | Thumbnail URL and dimensions |
| imageFull | struct | Full resolution URL and dimensions |
| pos | array | [longitude, latitude] |
| geometry | struct | GeoJSON Point geometry |
| similar | array | Related images at location |
| targetGeometry | struct | Optional target reference |

### Image URL Schema

Two resolution variants per entry:

```json
"imagePreview": { "url": "https://...", "width": 284, "height": 160 },
"imageFull": { "url": "https://...", "width": 1920, "height": 1080 }
```

### Temporal Links

Records reference similar images from other timestamps at the same coordinates. Average of 14.3 links per location.

## Limitations

Image URLs may [rot](https://en.wikipedia.org/wiki/Link_rot). Coverage concentrates in urban areas, and historical density varies by location.

## License

Research use permitted. Comply with source terms of service and local data regulations.</description>
<size>651707506</size>
</item><item>
<title>Subreddit comments/submissions 2005-06 to 2025-12</title>
<category>Dataset</category>
<infohash>3e3f64dee22dc304cdd2546254ca1f8e8ae542b4</infohash>
<guid>https://academictorrents.com/details/3e3f64dee22dc304cdd2546254ca1f8e8ae542b4</guid>
<link>https://academictorrents.com/details/3e3f64dee22dc304cdd2546254ca1f8e8ae542b4</link>
<description>This is the top 40,000 subreddits from reddit's history in separate files. You can use your torrent client to only download the subreddits you're interested in. These are from the pushshift dumps from 2005-06 to 2025-12 which can be found here https://academictorrents.com/details/3d426c47c767d40f82c7ef0f47c3acacedd2bf44 These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps If you have questions, please respond to this post https://www.reddit.com/r/pushshift/comments/1r5z42j/comment/o5mjcvn/ or DM u/Watchful on reddit</description>
<size>3965777405390</size>
</item><item>
<title>Lung Ultrasound Dataset (LUS-Dataset-Katumba)</title>
<category>Dataset</category>
<infohash>e6e9a5594174aaffee53b8f086e3bf86c02c45ad</infohash>
<guid>https://academictorrents.com/details/e6e9a5594174aaffee53b8f086e3bf86c02c45ad</guid>
<link>https://academictorrents.com/details/e6e9a5594174aaffee53b8f086e3bf86c02c45ad</link>
<description>This dataset contains a curated benchmark collection of 1,062 labelled lung ultrasound (LUS) images collected from patients at Mulago National Referral Hospital and Kiruddu Referral Hospital in Kampala, Uganda. The images were acquired and annotated by senior radiologists to support the development and evaluation of artificial intelligence (AI) models for pulmonary disease diagnosis. Each image is categorized into one of three classes: Probably COVID-19 (COVID-19), Diseased Lung but Probably Not COVID-19 (Other Lung Disease), and Healthy Lung. The dataset addresses key challenges in LUS interpretation, including inter-operator variability, low signal-to-noise ratios, and reliance on expert sonographers. It is particularly suitable for training and testing convolutional neural network (CNN)-based models for medical image classification tasks in low-resource settings. The images are provided in standard formats such as PNG or JPEG, with corresponding labels stored in structured files like CSV or JSON to facilitate ease of use in machine learning workflows. In this second version of the dataset, we have extended the resource by including a folder containing the original unprocessed raw data, as well as the scripts used to process, clean, and sort the data into the final labelled set. These additions promote transparency and reproducibility, allowing researchers to understand the full data pipeline and adapt it for their own applications. This resource is intended to advance research in deep learning for lung ultrasound analysis and to contribute toward building more accessible and reliable diagnostic tools in global health.     
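The card above promises labels in structured files such as CSV or JSON. A minimal loading sketch under that assumption (the column names "filename" and "label" and the sample rows are hypothetical, since the exact schema is not documented here):

```python
import csv
import io

# Hypothetical label file: the dataset card promises CSV/JSON labels but
# does not document column names, so "filename" and "label" are assumptions.
SAMPLE_CSV = "filename,label\nimg_0001.png,COVID-19\nimg_0002.png,Healthy Lung\n"

def load_labels(csv_text):
    """Map each image filename to one of the three diagnostic classes."""
    return {row["filename"]: row["label"]
            for row in csv.DictReader(io.StringIO(csv_text))}

labels = load_labels(SAMPLE_CSV)
print(labels["img_0001.png"])  # COVID-19
```

Pairing this mapping with the PNG/JPEG files then yields (image, class) examples for CNN training.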
Katumba, Andrew; Murindanyi, Sudi; Okila, Nixson; Nakatumba-Nabende, Joyce; Mwikirize, Cosmas; Serugunda, Jonathan; Bugeza, Samuel; Oriekot, Anthony; Bosa, Juliet; Nabawanuka, Eva (2025), “A Dataset of Lung Ultrasound Images for Automated AI-based Lung Disease Classification”, Mendeley Data, V2, doi: 10.17632/hb3p34ytvx.2</description>
<size>281447804</size>
</item><item>
<title>Wikipedia Asian languages 2026-02-01</title>
<category>Dataset</category>
<infohash>1bde58b51e4aad60f03ce1b688b691552fb3041e</infohash>
<guid>https://academictorrents.com/details/1bde58b51e4aad60f03ce1b688b691552fb3041e</guid>
<link>https://academictorrents.com/details/1bde58b51e4aad60f03ce1b688b691552fb3041e</link>
<description>Wikipedia database dumps of Asian language wikis with 10k articles or more. Wikipedia Multistream 2026-02-01. These 85 languages are included: Acehnese, Armenian, Assamese, Azerbaijani, Balinese, Bangla, Banjar, Banyumasan, Bashkir, Bishnupriya, Buginese, Burmese, Cantonese, Cebuano, Central Bikol, Central Kurdish, Chechen, Chinese, Chuvash, Classical Chinese, Dimli, Eastern Mari, Georgian, Gilaki, Gorontalo, Gujarati, Hakka, Hebrew, Hindi, Iloko, Indonesian, Japanese, Javanese, Kannada, Kara-Kalpak, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Maithili, Malay, Malayalam, Manipuri, Marathi, Mazanderani, Minangkabau, Mindong, Mingrelian, Minnan, Mongolian, Nepali, Newari, Odia, Ossetic, Pampangan, Pashto, Persian, Punjabi, Russian, Sanskrit, Santali, Saraiki, Shan, Sindhi, Sinhala, South Azerbaijani, Sundanese, Tagalog, Tajik, Talysh, Tamil, Tatar, Telugu, Thai, Turkish, Urdu, Uzbek, Vietnamese, Waray, Western Armenian, Western Mari, Western Punjabi, Wu, Yakut.</description>
<size>31208780863</size>
</item><item>
<title>Reddit comments/submissions 2026-01</title>
<category>Dataset</category>
<infohash>8412b89151101d88c915334c45d9c223169a1a60</infohash>
<guid>https://academictorrents.com/details/8412b89151101d88c915334c45d9c223169a1a60</guid>
<link>https://academictorrents.com/details/8412b89151101d88c915334c45d9c223169a1a60</link>
<description>Reddit comments and submissions from 2026-01 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>61629104259</size>
</item><item>
<title>Begemot.ai Dataset</title>
<category>Dataset</category>
<infohash>3ada9903be4621cf7e34cd5cf44f191b4124ccfe</infohash>
<guid>https://academictorrents.com/details/3ada9903be4621cf7e34cd5cf44f191b4124ccfe</guid>
<link>https://academictorrents.com/details/3ada9903be4621cf7e34cd5cf44f191b4124ccfe</link>
<description># Dataset Card for Begemot.ai ### Dataset Summary This dataset has 2,728,999 educational project descriptions in Russian. They were generated using AI on the Begemot.ai website. The content includes project titles, descriptions, chapters and chapter content on various educational topics. ### Languages The dataset is primarily in Russian (ru). ## Dataset Structure ### Data Fields This dataset includes the following fields: -  id : Unique identifier for the project (integer) -  url : URL of the project page (string) -  title : Title of the educational project (string) -  type : Type of project (string) -  description : Detailed description of the project (string) -  chapters : List of chapter titles (list of strings) -  chapter_content : JSON string mapping chapter titles to their content ### Data Splits All examples are in a single split.</description>
<size>1708074039</size>
</item><item>
<title>OpenPOCUS - Lung Ultrasound Image Database</title>
<category>Dataset</category>
<infohash>63ad0470f43e022cc73407be9c760449d947cb97</infohash>
<guid>https://academictorrents.com/details/63ad0470f43e022cc73407be9c760449d947cb97</guid>
<link>https://academictorrents.com/details/63ad0470f43e022cc73407be9c760449d947cb97</link>
<description>https://i.imgur.com/s0eFv64.png Background Lung ultrasound (LUS) offers advantages over traditional imaging for diagnosing pulmonary conditions, with superior accuracy compared to chest X-ray and similar performance to CT at lower cost. Despite these benefits, widespread adoption is limited by operator dependency, moderate interrater reliability, and training requirements. Deep learning (DL) could potentially address these challenges, but development of effective algorithms is hindered by the scarcity of comprehensive image repositories with proper metadata.</description>
<size>5256546243</size>
</item><item>
<title>Russian Educational Text Collection</title>
<category>Dataset</category>
<infohash>1f6b373346a0fa34de6b4d916984d698e0a623b3</infohash>
<guid>https://academictorrents.com/details/1f6b373346a0fa34de6b4d916984d698e0a623b3</guid>
<link>https://academictorrents.com/details/1f6b373346a0fa34de6b4d916984d698e0a623b3</link>
<description># Dataset Card for Russian Educational Text Collection ### Dataset Summary This dataset contains approximately 1.38M educational texts primarily in Russian with some content in Ukrainian and English. The content is extracted from presentations and documents, including educational presentations, essays, and various academic documents covering diverse topics from natural sciences to literature. ### Languages - Russian (ru) - primary language - Ukrainian (uk) - secondary language - English (en) - secondary language Russian is the predominant language in the dataset, while Ukrainian and English content appears less frequently. ## Dataset Structure ### Data Fields The dataset is split into two parquet files: - presentations (1,335,171 entries): -  title : Title of the presentation (string) -  slide_text : Array of slide contents (list of strings) - documents (47,474 entries): -  title : Title of the document (string) -  document_text : Full text content of the document (string) ## Additional Information ### License This dataset is dedicated to the public domain under the Creative Commons Zero (CC0) license. This means you can: * Use it for any purpose, including commercial projects * Modify it however you like * Distribute it without asking permission No attribution is required, but it's always appreciated!</description>
<size>304218686</size>
</item><item>
<title>Animations Dataset</title>
<category>Dataset</category>
<infohash>8799f1e66bf0a63a77e89a7917fbe281c13bcd9f</infohash>
<guid>https://academictorrents.com/details/8799f1e66bf0a63a77e89a7917fbe281c13bcd9f</guid>
<link>https://academictorrents.com/details/8799f1e66bf0a63a77e89a7917fbe281c13bcd9f</link>
<description># Dataset Card for Animations Dataset ### Dataset Summary This dataset contains 50,849 animations with their associated metadata and source images. Each animation consists of multiple frames composed of simple sketch-level drawings, text elements, and potentially embedded images. The dataset provides complete information about each animation, including frame components, source images, timing between frames, and canvas settings. This makes it suitable for various tasks such as animation analysis, generation, and modification. ### Languages The dataset is primarily monolingual: - English (en): Any text elements within animations are predominantly in English. ## Dataset Structure ### Data Files The dataset is stored as Parquet files with ZSTD compression: -  train-00000.parquet  through  train-00003.parquet  - Total: 4 shards, ~4.2 GB compressed ### Data Fields Each row in the Parquet files contains the following columns:

| Column | Type | Description |
|---|---|---|
| id | string | Unique identifier (UUID) for the animation |
| settings | string | JSON object containing canvas configuration |
| dtimes | list[int64] | Time delays between frames in milliseconds |
| frames_data | string | JSON array describing each frame's elements |
| images | list[binary] | PNG images used in the animation (decoded bytes) |

#### Settings Object The  settings  JSON contains: -  canvas_width ,  canvas_height : Dimensions of the animation canvas -  fillcolor : Background color of the canvas (if specified) -  default_font : Default font used for text elements -  default_font_size : Default font size #### Frames Data Structure The  frames_data  JSON is an array of arrays, where each inner array represents a frame's elements: -  type_for_loader : Element type (e.g., "text", "image") -  data : Object containing element properties: -  type : Element type -  centerx ,  centery : Position coordinates on the canvas -  text : Text content (for text elements) -  font ,  size : Font properties -  rotate_angle ,  angle : Rotation properties -  strokeColor ,  fillColor ,  textColor : Color properties -  src : Index into the  images  array (for image elements) -  children_data : Array of child elements (if any) ### Data Splits

| Split | Number of Examples |
|---|---|
| train | 50,849 |</description>
<size>4374676620</size>
</item><item>
<title>Sprite Compositing &amp; Animation Dataset</title>
<category>Dataset</category>
<infohash>df2a3742526f44dac4dbb80299333e84132c5b45</infohash>
<guid>https://academictorrents.com/details/df2a3742526f44dac4dbb80299333e84132c5b45</guid>
<link>https://academictorrents.com/details/df2a3742526f44dac4dbb80299333e84132c5b45</link>
<description># Sprite Compositing &amp; Animation Dataset A diverse dataset of image sequences with their source sprite assets. Contains animations, slideshows, and composited scenes created from transparent PNG sprites with additional effects and overlays. ## Dataset Statistics

| Metric | Value |
|---|---|
| Total animations | 50,849 |
| Total frames | 1,191,969 |
| Total source sprites | 312,926 |
| Avg frames/animation | 23.4 |
| Avg sprites/animation | 6.2 |
| Frame resolution | 800 x 450-600 |
| Sprite resolution | Variable |
| Total size | ~27 GB |
| Format | Parquet (ZSTD compressed) |

## Schema

| Column | Type | Description |
|---|---|---|
| id | string | Unique identifier (UUID) |
| source_images | list[binary] | PNG bytes of source sprite assets (RGBA), sorted by index. Can be empty. |
| frames | list[binary] | PNG bytes of final rendered frames (RGB 800x450-600), sorted by sequence order |
| num_sources | int64 | Number of source sprites (0 for rows without source assets) |
| num_frames | int64 | Number of frames in the sequence |

## Data Structure Each sample contains: 1. **Source Images** ( source_images ): Transparent PNG sprites/assets (RGBA mode) used to compose the final frames. Variable sizes. May include characters, objects, etc. 2. **Frames** ( frames ): Final rendered image sequence (RGB mode, typically 800x450-600). 
These are the result of compositing source sprites with additional effects like: - Text overlays - Drawings and sketches - Backgrounds - Animations and transitions - Visual effects ### Content Variations - **Animations**: Smooth frame-by-frame animations of sprites (e.g., character movement) - **Slideshows**: Discrete scene transitions using source assets - **Composited Scenes**: Source sprites combined with text, drawings, and effects - **Sketches**: Hand-drawn or illustrated frames with optional sprite references **Note**: Not all frames are strict animations - many are slideshows or scene compositions where source assets are combined with additional elements.</description>
<size>27130846899</size>
</item><item>
<title>NNTP Discussion Archives</title>
<category>Dataset</category>
<infohash>cac053d01e256ae3001bf40c5c98eefa86cdc870</infohash>
<guid>https://academictorrents.com/details/cac053d01e256ae3001bf40c5c98eefa86cdc870</guid>
<link>https://academictorrents.com/details/cac053d01e256ae3001bf40c5c98eefa86cdc870</link>
<description># NNTP Discussion Archives A large-scale collection of text discussions from public NNTP (Network News Transfer Protocol) newsgroups spanning over two decades. ## Dataset Statistics

| Metric | Value |
|---|---|
| Total messages | 386,629,949 |
| Unique newsgroups | 159,345 |
| Date range | 2002 - 2026 |
| Total size | ~191 GB (compressed) |
| File format | Parquet (ZSTD) |
| Number of files | 256 |
| Average content length | ~1,400 characters |

## Schema

| Column | Type | Description |
|---|---|---|
| message_id | string | Original message identifier (unchanged) |
| newsgroups | string | Target newsgroup(s), comma-separated if cross-posted |
| author | string | Message author with email addresses redacted as [email] |
| subject | string | Subject line |
| date | string | RFC 2822 formatted date string |
| content | string | Message body with email addresses redacted as [email] |

## Top Newsgroups by Volume

| Newsgroup | Messages |
|---|---|
| alt.atheism | 5,658,023 |
| free.usenet | 4,691,561 |
| alt.fan.rush-limbaugh | 4,659,639 |
| alt.politics | 3,919,772 |
| fr.soc.politique | 3,554,434 |
| it.sport.calcio.milan | 2,961,804 |
| it.politica | 2,802,687 |
| alt.politics.bush | 2,786,316 |
| talk.politics.misc | 2,784,668 |
| Other (159,336 groups) | 475,430,274 |

*Cross-posted messages are counted once per newsgroup, so totals exceed the 386M unique messages.* ## Data Processing **Filtering:** Binary-focused groups (*.binaries.*, *.pictures.*, *.multimedia.*), binary posts with file-sharing indicators, messages exceeding 500KB, and unrecoverable encoding errors are excluded. 
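The group-name exclusion above can be reproduced with shell-style pattern matching. A minimal sketch (the function name and the use of fnmatch are our assumptions, not the authors' actual pipeline; only the three patterns come from this card):

```python
import fnmatch

# Exclusion patterns as listed in this card's Filtering section.
BINARY_PATTERNS = ["*.binaries.*", "*.pictures.*", "*.multimedia.*"]

def is_binary_group(name):
    """True if a newsgroup name matches any excluded binary-focused pattern."""
    return any(fnmatch.fnmatch(name, pat) for pat in BINARY_PATTERNS)

print(is_binary_group("alt.binaries.sounds"))  # True
print(is_binary_group("alt.atheism"))          # False
```

Cross-posted records would need the comma-separated  newsgroups  field split before applying such a check per group.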
Spam is largely **not filtered**: the dataset includes advertisements, phishing, and low-quality posts present in raw newsgroups. **Encoding:** Messages are normalized to UTF-8 with the following decoding pipeline: - Quoted-Printable: MIME-encoded content decoded to text - Base64: Text base64 content decoded; binary base64 excluded - Legacy encodings: invalid UTF-8 sequences re-decoded via legacy-encoding detection (Windows-1252, ISO-8859-*, KOI8-R, Shift-JIS, GBK, and others) - MIME encoded-word headers decoded to UTF-8 **Deduplication:** Exact content duplicates removed via xxHash64 hashing (first occurrence retained). **Privacy:** Email addresses in  author  and  content  fields redacted as  [email] ;  message_id  unchanged. ## Considerations - Messages were posted to public newsgroups - Content reflects unmoderated discussions and may contain controversial opinions</description>
<size>204065504201</size>
</item><item>
<title>Reddit comments/submissions 2005-06 to 2025-12</title>
<category>Dataset</category>
<infohash>3d426c47c767d40f82c7ef0f47c3acacedd2bf44</infohash>
<guid>https://academictorrents.com/details/3d426c47c767d40f82c7ef0f47c3acacedd2bf44</guid>
<link>https://academictorrents.com/details/3d426c47c767d40f82c7ef0f47c3acacedd2bf44</link>
<description>Reddit comments and submissions from 2005-06 to 2025-12 collected by pushshift and u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps The more recent dumps are collected by u/RaiderBDev</description>
<size>3804096351995</size>
</item><item>
<title>RU-OK? Uptime measurements of Russian/Belarusian DDoS targets of IT ARMY</title>
<category>Dataset</category>
<infohash>87b8ba53e3f7d58ac3845f1be81f49682c2c68f2</infohash>
<guid>https://academictorrents.com/details/87b8ba53e3f7d58ac3845f1be81f49682c2c68f2</guid>
<link>https://academictorrents.com/details/87b8ba53e3f7d58ac3845f1be81f49682c2c68f2</link>
<description>In 2022, Russia began a full-scale invasion of Ukraine in the escalating Russo-Ukrainian war. Ukrainian ingenuity quickly led to the creation of a volunteer cyberwarfare organization, [IT Army of Ukraine](https://en.wikipedia.org/wiki/IT_Army_of_Ukraine), which conducted both defensive and offensive operations. Notably, they invited anyone with an internet connection to DDoS an ever-growing list of Russian and Belarusian websites, with the goal of disrupting infrastructure and draining Russia’s own cyberwarfare capabilities. I made a very quick project to assess the status of Russian and Belarusian internet properties (via [RIPE Atlas](https://atlas.ripe.net/)) being targeted by hacktivists. Specifically, I evaluated almost every target listed by the IT ARMY Telegram group with many unique probes between 2022-02-27 (the day after IT ARMY was created) and 2022-05-30 to check for service availability. I wanted to check connectivity from within Russia’s borders because I saw many mixed reports across Twitter and Reddit, with international parties (Americans, Ukrainians, etc.) claiming many sites had been knocked offline, while Russians chimed in that many sites remained online for them. The truth is more complex - some sites were significantly disrupted and took time to recover globally, while others had existing mitigations in place, and still others seemed to deprioritize or sinkhole international traffic. This research was included in several news articles around the world: * Ukraine’s IT army is doing well, hitting Russia with ‘cost and chaos’ - [VentureBeat](https://venturebeat.com/2022/03/04/ukraines-it-army-is-doing-well-hitting-russia-with-cost-and-chaos/) * Ukraine deserves an IT army. 
We have to live with the fallout - [VentureBeat](https://venturebeat.com/2022/03/04/ukraine-deserves-an-it-army-we-have-to-live-with-the-fallout/) * Ukraine: We’ve repelled ‘nonstop’ DDoS attacks from Russia - [VentureBeat](https://venturebeat.com/2022/03/07/ukraine-weve-repelled-nonstop-ddos-attacks-from-russia/) * Guerre en Ukraine : les cyberattaques contre la Russie, le « cri de colère » d’une armée de volontaires - [Le Monde](https://www.lemonde.fr/pixels/article/2022/03/25/guerre-en-ukraine-face-a-la-russie-les-cyberattaques-en-forme-de-cri-de-colere-d-une-armee-de-volontaires_6119064_4408996.html) * Ukraine Demanded Cloudflare Stop Protecting Russians From Cyberattacks. Cloudflare Said No - [Forbes](https://www.forbes.com/sites/thomasbrewster/2022/03/07/cloudflare-rejects-ukraines-call-to-stop-protecting-russians-from-cyberattacks/) The data and methodology for RU-OK were originally published on my GitHub, where I hope they will remain. However, I’ve received the occasional nastygram about this research and recently received a takedown request from a Russian cybersecurity firm, claiming that sensitive information is being stored in my repository. There isn’t, of course; all the data consists of public measurements against public endpoints. Still, I’m concerned that fraudulent reports could result in my repo getting deleted, so I’m creating a censorship-resistant copy and distributing it on my blog and on Academic Torrents. It’s long overdue anyway. I encourage anyone curious to dig through the data, as you can watch both the immediate impact of DDoS attacks and the change in Russian government and company resilience over several months as these attacks became commonplace.</description>
<size>1609147173</size>
</item><item>
<title>Wikipedia Wikidata 2026-01-01</title>
<category>Dataset</category>
<infohash>91f29e60cc4a65747a346109ef49a48808c6a2cd</infohash>
<guid>https://academictorrents.com/details/91f29e60cc4a65747a346109ef49a48808c6a2cd</guid>
<link>https://academictorrents.com/details/91f29e60cc4a65747a346109ef49a48808c6a2cd</link>
<description>Database dump of the Wikidata wiki. Wikipedia Multistream 2026-01-01.</description>
<size>175980564292</size>
</item><item>
<title>Wikipedia Commons 2026-01-01</title>
<category>Dataset</category>
<infohash>83f2cfd35db16f696000bd3dee56e3837fe3e60c</infohash>
<guid>https://academictorrents.com/details/83f2cfd35db16f696000bd3dee56e3837fe3e60c</guid>
<link>https://academictorrents.com/details/83f2cfd35db16f696000bd3dee56e3837fe3e60c</link>
<description>Database dump of Wikipedia Commons. Wikipedia Multistream 2026-01-01.</description>
<size>106644000008</size>
</item><item>
<title>Reddit comments/submissions 2025-12</title>
<category>Dataset</category>
<infohash>481bf2eac43172ae724fd6c75dbcb8e27de77734</infohash>
<guid>https://academictorrents.com/details/481bf2eac43172ae724fd6c75dbcb8e27de77734</guid>
<link>https://academictorrents.com/details/481bf2eac43172ae724fd6c75dbcb8e27de77734</link>
<description>Reddit comments and submissions from 2025-12. Documentation, JSON schemas, and more can be found at https://github.com/ArthurHeitmann/arctic_shift. Helper scripts for processing the files can be found at https://github.com/Watchful1/PushshiftDumps.</description>
<size>56418645823</size>
</item><item>
<title>GitGud Code Dataset</title>
<category>Dataset</category>
<infohash>221571632238b826f0aa6ec4f370af633575cae4</infohash>
<guid>https://academictorrents.com/details/221571632238b826f0aa6ec4f370af633575cae4</guid>
<link>https://academictorrents.com/details/221571632238b826f0aa6ec4f370af633575cae4</link>
<description># GitGud Code Dataset ## Dataset Description This dataset was compiled from code repositories hosted on [GitGud.io](https://gitgud.io), a GitLab-based code hosting platform. GitGud.io serves as an alternative git hosting service used by various developer communities and open-source projects. ### Dataset Summary | Statistic | Value | |&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;-| | **Total Files** | 16,322,315 | | **Total Repositories** | 7,204 | | **Total Size** | 17.46 GB (compressed Parquet) | | **Programming Languages** | 2,185 | | **File Format** | Parquet with Zstd compression (17 files) | ### Key Features - **Diverse code corpus**: Contains code from over 7,000 repositories across various domains - **Wide language coverage**: Spans 2,185 programming languages and file types detected by file extension mapping - **Rich metadata**: Includes repository name, file path, detected language, license information, and file size - **Quality filtered**: Filtering applied to remove binary files, overly long lines, and license files ### Languages The dataset includes 2,185 programming languages and file types. 
The top 30 languages by file count: | Rank | Language | File Count | |&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;| | 1 | tw (Twine) | 3,301,366 | | 2 | XML | 3,281,566 | | 3 | svg | 1,744,500 | | 4 | C# | 1,367,799 | | 5 | JavaScript | 1,252,710 | | 6 | C++ | 731,619 | | 7 | erb | 710,279 | | 8 | JSON | 398,139 | | 9 | Text | 377,948 | | 10 | twee | 300,576 | | 11 | csv | 205,230 | | 12 | HTML | 170,711 | | 13 | Markdown | 160,735 | | 14 | TypeScript | 147,173 | | 15 | Lua | 117,079 | | 16 | PHP | 116,059 | | 17 | none | 111,791 | | 18 | pal | 110,626 | | 19 | CSS | 108,664 | | 20 | Python | 106,261 | | 21 | dm | 98,333 | | 22 | Ruby | 93,685 | | 23 | _comment | 91,730 | | 24 | Java | 81,190 | | 25 | YAML | 63,289 | | 26 | ActionScript | 62,210 | | 27 | Git | 43,748 | | 28 | mdwn | 42,654 | | 29 | mk | 41,789 | | 30 | INI | 39,760 | ### Licenses The dataset includes files from repositories with various licenses: | License | File Count | |&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;| | mit | 9,517,343 | | bsd-3-clause | 3,315,732 | | unknown | 2,935,736 | | mpl-2.0 | 338,040 | | gpl-2.0 | 79,415 | | lgpl-2.1 | 38,429 | | gpl-3.0 | 25,964 | | apache-2.0 | 20,562 | | cc-by-4.0 | 18,703 | | agpl-3.0 | 15,367 | | cc-by-nc-4.0 | 6,362 | | wtfpl | 6,163 | | bsd-2-clause | 3,749 | | zlib | 482 | | unlicense | 261 | | cc-by-sa-4.0 | 7 | ## Dataset Structure ### Data Fields | Field | Type | Description | |&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;-| |  code  | string | Content of the source file (UTF-8 encoded) | |  repo_name  | string | Name of the GitGud repository (format:  username/repo ) | |  path  | string | Path of the file within the repository (relative to repo root) | |  language  | string | 
Programming language detected by file extension mapping | |  license  | string | License of the repository (SPDX identifier or "unknown") | |  size  | int64 | Size of the source file in bytes | ### Data Format - **Format**: Apache Parquet with Zstd compression (level 19) - **File Structure**: 17 files ( gitgud-00000.parquet  to  gitgud-00016.parquet ) - **Rows per shard**: ~1,000,000 (except last shard: 322,315) ### Data Splits All examples are in the train split. There is no validation or test split. ### Example Data Point       code :  using System;\nusing System.Collections.Generic;\n... ,  repo_name :  username/game-mod ,  path :  src/GameMod/Player.cs ,  language :  C# ,  license :  mit ,  size : 2048      ## Dataset Creation ### Pipeline Overview The dataset was created through a multi-stage pipeline: 1. **Repository Discovery**: Scraping public repository URLs from GitGud.io's GitLab API v4 endpoint using multiple sort orderings ( id ,  name ,  path ,  updated_at ,  star_count ,  last_activity_at ,  similarity ) 2. **Branch Enumeration**: Fetching all branches for each repository via the GitLab API 3. **Archive Download**: Downloading  .tar.gz  archives for each repository/branch combination 4. **Content Extraction**: Extracting and filtering source code files from archives 5. **Parquet Generation**: Writing filtered records to Parquet shards with Zstd compression ### Language Detection Programming languages are detected using file extension mapping. The pipeline maps ~80 programming languages by their file extensions, including: - **Major languages**: Python, JavaScript, TypeScript, C, C++, C#, Java, Go, Rust, Ruby, PHP - **Configuration**: JSON, YAML, TOML, XML, INI - **Markup**: HTML, CSS, Markdown, LaTeX - **Game development**: GLSL, HLSL, GDScript - **And many more** Files with unrecognized extensions are labeled with the extension itself (without the dot prefix). 
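The extension-mapping step can be sketched in a few lines. The mapping table and helper below are hypothetical (the pipeline's actual ~80-language table is not published); they only illustrate the labeling rules this card describes.

```python
import os

# Hypothetical, tiny extension-to-language table for illustration only.
EXT_TO_LANG = {
    ".py": "Python", ".js": "JavaScript", ".ts": "TypeScript",
    ".cs": "C#", ".cpp": "C++", ".rb": "Ruby", ".json": "JSON",
}
SPECIAL_FILENAMES = {"Dockerfile", "Makefile"}

def detect_language(path):
    """Map a file path to a language label the way the card describes."""
    name = os.path.basename(path)
    if name in SPECIAL_FILENAMES:
        return name  # special filename matching
    ext = os.path.splitext(name)[1]
    if not ext:
        return "none"  # no extension at all
    # Unrecognized extensions fall back to the extension itself, sans dot.
    return EXT_TO_LANG.get(ext, ext[1:])
```

This is why labels like "tw", "erb", and "none" appear alongside real language names in the top-30 table: unmapped extensions pass through as-is.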
Files without extensions are labeled as "none" or by special filename matching (e.g., "Dockerfile", "Makefile"). ### License Detection Licenses are detected by: 1. Scanning for license files ( LICENSE ,  LICENSE.txt ,  LICENSE.md ,  COPYING ,  COPYING.txt ,  COPYING.md ) 2. Matching license text against known patterns (MIT, Apache 2.0, GPL variants, BSD, Creative Commons, MPL, ISC, Unlicense, Artistic, WTFPL, Zlib, etc.) 3. Defaulting to "unknown" if no license can be detected ### File Filtering Filtering is applied to ensure data quality: #### Size Limits | Limit | Value | |&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;-| | Max repository archive size | 64 MB | | Max line length | 1,000 characters | #### Content Filtering - **Binary Detection**: Files with null bytes in the first 1KB are excluded - **UTF-8 Validation**: Files must be decodable as UTF-8 (with fallback to latin-1, cp1252, iso-8859-1) - **Long Lines**: Files with any line exceeding 1,000 characters are excluded - **License Files**: License files (LICENSE, COPYING, etc.) are excluded from the dataset (but used for license detection) ### Source Data All data originates from public repositories hosted on [GitGud.io](https://gitgud.io). ## Considerations for Using the Data ### Personal and Sensitive Information The dataset may contain: - Email addresses in code comments or configuration files - API keys or credentials that were accidentally committed - Personal information in comments or documentation Users should exercise caution and implement appropriate filtering when using this data. ### Licensing Information This dataset is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. The license field in each data point indicates the license of the source repository.</description>
<size>18751337953</size>
</item><item>
<title>Mos.Hub Code Dataset</title>
<category>Dataset</category>
<infohash>991f0d7eaa11bfda7f08e9bd82466458982cd430</infohash>
<guid>https://academictorrents.com/details/991f0d7eaa11bfda7f08e9bd82466458982cd430</guid>
<link>https://academictorrents.com/details/991f0d7eaa11bfda7f08e9bd82466458982cd430</link>
<description># Mos.Hub Code Dataset ## Dataset Description This dataset was compiled from code repositories hosted on [Mos.Hub](https://hub.mos.ru) (hub.mos.ru), a code hosting platform operated by the Moscow Government. Mos.Hub is a service for storing and working with source code, based on the Git version control system, primarily used by Russian developers and government-related projects. ### Dataset Summary | Statistic | Value | |---|---| | **Total Files** | 15,740,580 | | **Total Repositories** | 16,130 | | **Total Size** | 529 MB (compressed Parquet) | | **Uncompressed Size** | ~29 GB | | **Programming Languages** | 297 | | **File Format** | Parquet (single file) | ### Key Features - **Russian code corpus**: Contains code from repositories hosted on Moscow's official code platform, featuring Russian comments and documentation - **Diverse language coverage**: Spans 297 programming languages identified by [github-linguist](https://github.com/github-linguist/linguist) - **Quality filtered**: Binary files and low-quality content have been removed ### Languages The dataset includes 297 programming languages. 
The top 30 languages by file count: | Rank | Language | File Count | |&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;| | 1 | Ruby | 8,333,731 | | 2 | JavaScript | 1,786,730 | | 3 | YAML | 1,757,614 | | 4 | Vue | 699,171 | | 5 | Markdown | 639,585 | | 6 | Haml | 538,837 | | 7 | GraphQL | 269,485 | | 8 | JSON | 214,354 | | 9 | PHP | 191,150 | | 10 | SVG | 172,884 | | 11 | Shell | 172,451 | | 12 | Go | 88,089 | | 13 | Ignore List | 87,432 | | 14 | SCSS | 80,716 | | 15 | Python | 77,532 | | 16 | C++ | 63,177 | | 17 | HTML+ERB | 62,605 | | 18 | Text | 48,400 | | 19 | Jest Snapshot | 43,638 | | 20 | HTML | 42,489 | | 21 | C | 38,354 | | 22 | reStructuredText | 26,342 | | 23 | Rust | 24,818 | | 24 | E-mail | 23,993 | | 25 | XML | 22,715 | | 26 | Java | 14,807 | | 27 | Gettext Catalog | 14,429 | | 28 | C# | 13,405 | | 29 | CSS | 12,657 | | 30 | Protocol Buffer Text Format | 12,181 | ## Dataset Structure ### Data Fields | Field | Type | Description | |&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;-| |  file_text  | string | Content of the source file (UTF-8 encoded) | |  language  | string | Programming language as identified by [github-linguist](https://github.com/github-linguist/linguist) | |  file_name  | string | Name of the source file | ### Data Format - **Format**: Apache Parquet - **File Structure**: Single file ( data.parquet ) ### Data Splits All examples are in the train split. There is no validation or test split. ### Example Data Point       file_text :  package mainnnimport "fmt"nnfunc main() n    fmt.Println("Hello")nn ,  language :  Go ,  file_name :  main.go       ## Dataset Creation ### Source Data All data originates from public repositories hosted on [Mos.Hub](https://hub.mos.ru). 
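The filtering this card describes (binary removal, UTF-8 validation, deduplication) can be sketched with a couple of stdlib helpers. These function names are hypothetical; the actual pipeline is not published.

```python
import hashlib

def is_utf8_text(data):
    """True if the bytes decode as UTF-8 and contain no NUL bytes."""
    if b"\x00" in data:
        return False  # crude binary-file check
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

def dedupe(records):
    """Keep the first record for each unique file_text value."""
    seen = set()
    out = []
    for rec in records:
        digest = hashlib.sha256(rec["file_text"].encode("utf-8")).digest()
        if digest not in seen:
            seen.add(digest)
            out.append(rec)
    return out
```

Hashing the content rather than storing it keeps the seen-set small when deduplicating millions of files.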
### Language Detection Programming languages are detected using [github-linguist](https://github.com/github-linguist/linguist), GitHub's library for detecting programming languages. ### Filtering - **Deduplication**: The dataset has been deduplicated to ensure unique code files - **Binary Files**: Binary files have been removed from the dataset - **UTF-8 Validation**: Files must be valid UTF-8 encoded text ## Considerations for Using the Data ### Personal and Sensitive Information The dataset may contain: - Email addresses in code comments or configuration files - API keys or credentials that were accidentally committed - Personal information in comments or documentation Users should exercise caution and implement appropriate filtering when using this data. ### Licensing Information This dataset has been compiled with an analysis of the licenses used in the repositories to ensure ethical collection and use of the data. Users of this dataset should respect the rights of the authors and use the data responsibly.</description>
<size>554021034</size>
</item><item>
<title>Google Code Archive Dataset</title>
<category>Dataset</category>
<infohash>a342da363792ac5fa018039d5a57c81be74e4b52</infohash>
<guid>https://academictorrents.com/details/a342da363792ac5fa018039d5a57c81be74e4b52</guid>
<link>https://academictorrents.com/details/a342da363792ac5fa018039d5a57c81be74e4b52</link>
<description>## Dataset Description This dataset was compiled from the [Google Code Archive](https://code.google.com/archive/), a preserved snapshot of projects hosted on Google Code, Google's open-source project hosting service that operated from 2006 to 2016. Google Code was one of the major code hosting platforms of its era, hosting hundreds of thousands of open-source projects before its shutdown. The archive provides a unique historical record of open-source development during a formative period of modern software engineering. ### Dataset Summary | Statistic | Value | |---|---| | **Total Files** | 65,825,565 | | **Total Repositories** | 488,618 | | **Total Size** | 47 GB (compressed Parquet) | | **Programming Languages** | 454 | | **File Format** | Parquet with Zstd compression (71 files) | ### Key Features - **Historical open-source corpus**: Contains code from over 488K repositories hosted on Google Code during 2006-2016 - **Diverse language coverage**: Spans 454 programming languages identified by [go-enry](https://github.com/go-enry/go-enry) (based on GitHub Linguist rules) - **Rich metadata**: Includes repository name, file path, detected language, license information, and file size - **Quality filtered**: Extensive filtering to remove vendor code, build artifacts, generated files, and low-quality content - **Era-specific patterns**: Captures coding conventions and library usage from an earlier era of software development ### Languages The dataset includes 454 programming languages. 
The top 30 languages by file count: | Rank | Language | File Count | |&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;| | 1 | Java | 16,331,993 | | 2 | PHP | 12,764,574 | | 3 | HTML | 5,705,184 | | 4 | C++ | 5,090,685 | | 5 | JavaScript | 4,937,765 | | 6 | C | 4,179,202 | | 7 | C# | 3,872,245 | | 8 | Python | 2,207,240 | | 9 | CSS | 1,697,385 | | 10 | Objective-C | 1,186,050 | | 11 | Shell | 639,183 | | 12 | Java Server Pages | 541,498 | | 13 | ActionScript | 540,557 | | 14 | Makefile | 481,563 | | 15 | ASP.NET | 381,389 | | 16 | Smarty | 339,555 | | 17 | Ruby | 331,743 | | 18 | Go | 316,427 | | 19 | Perl | 307,960 | | 20 | Vim Script | 216,236 | | 21 | Lua | 215,226 | | 22 | HTML+PHP | 150,781 | | 23 | HTML+Razor | 149,131 | | 24 | MATLAB | 145,686 | | 25 | Batchfile | 138,523 | | 26 | Pascal | 135,992 | | 27 | Visual Basic .NET | 118,732 | | 28 | TeX | 110,379 | | 29 | Less | 98,221 | | 30 | Unix Assembly | 94,758 | ### Licenses The dataset includes files from repositories with various licenses as specified in the Google Code Archive: | License | File Count | |&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;| | Apache License 2.0 (asf20) | 21,568,143 | | GNU GPL v3 (gpl3) | 14,843,470 | | GNU GPL v2 (gpl2) | 6,824,185 | | Other Open Source (oos) | 5,433,436 | | MIT License (mit) | 4,754,567 | | GNU LGPL (lgpl) | 4,073,137 | | BSD License (bsd) | 3,787,348 | | Artistic License (art) | 1,910,047 | | Eclipse Public License (epl) | 1,587,289 | | Mozilla Public License 1.1 (mpl11) | 580,102 | | Multiple Licenses (multiple) | 372,457 | | Google Summer of Code (gsoc) | 63,292 | | Public Domain (publicdomain) | 28,092 | ## Dataset Structure ### Data Fields | Field | Type | Description | 
|&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;-| |  code  | string | Content of the source file (UTF-8 encoded) | |  repo_name  | string | Name of the Google Code project | |  path  | string | Path of the file within the repository (relative to repo root) | |  language  | string | Programming language as identified by [go-enry](https://github.com/go-enry/go-enry) | |  license  | string | License of the repository (Google Code license identifier) | |  size  | int64 | Size of the source file in bytes | ### Data Format - **Format**: Apache Parquet with Zstd compression - **File Structure**: 71 files ( google_code_0000.parquet  to  google_code_0070.parquet ) ### Data Splits All examples are in the train split. There is no validation or test split. ### Example Data Point</description>
<size>50126651493</size>
</item><item>
<title>NotaBug Code Dataset</title>
<category>Dataset</category>
<infohash>ba10507193e4169b1b2420fb81e6b4999840e0f5</infohash>
<guid>https://academictorrents.com/details/ba10507193e4169b1b2420fb81e6b4999840e0f5</guid>
<link>https://academictorrents.com/details/ba10507193e4169b1b2420fb81e6b4999840e0f5</link>
<description># NotaBug Code Dataset ## Dataset Description This dataset was compiled from code repositories hosted on [NotaBug.org](https://notabug.org), a free code hosting platform that emphasizes software freedom and privacy. NotaBug is built on a fully free software stack and is popular among free software advocates and privacy-conscious developers. ### Dataset Summary | Statistic | Value | |&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;-| | **Total Files** | 12,622,961 | | **Total Repositories** | 11,660 | | **Total Size** | 12 GB (compressed Parquet) | | **Programming Languages** | 6,306 (by file extension) | | **File Format** | Parquet with Zstd compression (12 files) | ### Key Features - **Free software focused corpus**: Contains code from repositories on a platform dedicated to software freedom - **Diverse language coverage**: Spans thousands of file types identified by file extension - **Rich metadata**: Includes repository name, file path, detected language, license information, and file size ### Languages The dataset includes files from many programming languages and file types. Languages are detected by file extension. 
The top 30 languages by file count: | Rank | Language | File Count | |&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;| | 1 | C++ | 2,219,208 | | 2 | po | 2,022,441 | | 3 | none | 1,572,451 | | 4 | PHP | 951,354 | | 5 | patch | 637,317 | | 6 | svg | 547,170 | | 7 | XML | 502,139 | | 8 | Python | 392,476 | | 9 | Text | 296,953 | | 10 | JavaScript | 233,368 | | 11 | JSON | 198,981 | | 12 | Scheme | 192,409 | | 13 | Markdown | 182,342 | | 14 | info | 155,078 | | 15 | slackbuild | 154,859 | | 16 | HTML | 149,824 | | 17 | Shell | 133,325 | | 18 | log | 127,393 | | 19 | Makefile | 112,989 | | 20 | INI | 110,537 | | 21 | Lua | 84,303 | | 22 | in | 75,138 | | 23 | Assembly | 74,519 | | 24 | list | 58,346 | | 25 | Java | 48,781 | | 26 | CSS | 48,112 | | 27 | mk | 47,373 | | 28 | dtsi | 43,825 | | 29 | diff | 42,125 | | 30 | el | 41,017 | ### Licenses The dataset includes files from repositories with various licenses: | License | File Count | |&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;| | mit | 10,029,349 | | mpl-2.0 | 1,178,420 | | unknown | 888,840 | | gpl-2.0 | 333,538 | | gpl-3.0 | 158,975 | | unlicense | 11,805 | | cc-by-4.0 | 8,367 | | bsd-2-clause | 4,718 | | agpl-3.0 | 3,055 | | cc-by-sa-4.0 | 2,309 | | wtfpl | 1,314 | | cc0-1.0 | 1,188 | | bsd-3-clause | 601 | | cc-by-nc-4.0 | 269 | | lgpl-3.0 | 137 | | lgpl-2.1 | 76 | ## Dataset Structure ### Data Fields | Field | Type | Description | |&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;-| |  code  | string | Content of the source file (UTF-8 encoded) | |  repo_name  | string | Name of the NotaBug repository (format:  username/repo ) | |  path  | string | Path of the file within the repository (relative to repo root) | |  language  | string | Programming 
language as inferred by file extension | |  license  | string | License of the repository (SPDX identifier or "unknown") | |  size  | int64 | Size of the source file in bytes | ### Data Format - **Format**: Apache Parquet with Zstd compression - **File Structure**: 12 files ( notabug_0000.parquet  to  notabug_0011.parquet ) ### Data Splits All examples are in the train split. There is no validation or test split. ### Example Data Point       code :  #!/usr/bin/env python2n# -*- coding: utf-8 -*-n# Copyright (C) 2014... ,  repo_name :  intermsofthewhole/libreboot ,  path :  resources/utilities/i945gpu/intel-regs.py ,  language :  Python ,  license :  mit ,  size : 3733      ## Dataset Creation ### Source Data All data originates from public repositories hosted on [NotaBug.org](https://notabug.org). ### Language Detection Programming languages are detected by file extension inference. ### License Detection Licenses are detected by scanning for license files in repositories and matching against known license patterns. Repositories without a detectable license are marked as "unknown". ### File Filtering - **Long Lines**: Files with any line exceeding 1,000 characters were excluded - **Deduplication**: No deduplication was performed on the dataset ## Considerations for Using the Data ### Personal and Sensitive Information The dataset may contain: - Email addresses in code comments or configuration files - API keys or credentials that were accidentally committed - Personal information in comments or documentation Users should exercise caution and implement appropriate filtering when using this data. ### Licensing Information This dataset is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. The license field in each data point indicates the license of the source repository.</description>
<size>12602550935</size>
</item><item>
<title>JihuLab Code Dataset</title>
<category>Dataset</category>
<infohash>004c9565d325cf8951aa23d3a56ebb806898fc7f</infohash>
<guid>https://academictorrents.com/details/004c9565d325cf8951aa23d3a56ebb806898fc7f</guid>
<link>https://academictorrents.com/details/004c9565d325cf8951aa23d3a56ebb806898fc7f</link>
<description># JihuLab Code Dataset ## Dataset Description This dataset was compiled from code repositories hosted on [JihuLab](https://jihulab.com), a GitLab-based code hosting platform operated by JiHu (GitLab's Chinese joint venture). JihuLab serves as the primary GitLab instance for Chinese developers and enterprises, offering localized services and compliance with Chinese regulations. This dataset is particularly valuable for training code models with Chinese language understanding and enterprise-level coding practices. ### Dataset Summary | Statistic | Value | |---|---| | **Total Files** | 1,853,253 | | **Total Repositories** | 11,589 | | **Total Size** | 1.5 GB (compressed Parquet) / 12.76 GB (uncompressed) | | **Programming Languages** | 304 | | **File Format** | Parquet with Zstd compression | ### Key Features - **Chinese developer ecosystem**: Contains code from JihuLab, GitLab's official Chinese distribution, featuring Chinese comments, documentation, and variable names - **Diverse language coverage**: Spans 304 programming languages identified by [go-enry](https://github.com/go-enry/go-enry) (based on GitHub Linguist rules) - **Rich metadata**: Includes repository name, file path, detected language, license information, and file size - **Enterprise and open-source projects**: Includes code from both individual developers and Chinese enterprises using GitLab - **Quality filtered**: Extensive filtering to remove vendor code, build artifacts, generated files, and low-quality content ### Languages The dataset includes 304 programming languages. 
The top 30 languages by file count: | Rank | Language | File Count | |&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;| | 1 | Java | 348,517 | | 2 | C | 209,924 | | 3 | JavaScript | 191,164 | | 4 | Python | 172,798 | | 5 | C++ | 136,046 | | 6 | Go | 80,000 | | 7 | TypeScript | 79,067 | | 8 | HTML | 69,173 | | 9 | C# | 64,511 | | 10 | Rust | 50,515 | | 11 | Shell | 43,352 | | 12 | Vue | 40,687 | | 13 | TSX | 36,844 | | 14 | CSS | 34,779 | | 15 | Makefile | 26,227 | | 16 | Ruby | 25,812 | | 17 | PHP | 21,401 | | 18 | CMake | 15,292 | | 19 | Kotlin | 14,220 | | 20 | BitBake | 13,060 | | 21 | SCSS | 10,957 | | 22 | Scala | 9,333 | | 23 | Dart | 9,125 | | 24 | Lua | 7,413 | | 25 | ASP.NET | 7,005 | | 26 | Vim Script | 5,710 | | 27 | Unix Assembly | 5,239 | | 28 | Starlark | 5,134 | | 29 | Objective-C | 4,931 | | 30 | Factor | 4,920 | ### Licenses The dataset includes files from repositories with various licenses. 
Repositories with restrictive licenses (CC-BY-ND variants, Commons Clause, SSPL) were excluded: | License | File Count | |&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;| | apache-2.0 | 551,008 | | unknown | 535,320 | | mit | 320,834 | | agpl-3.0 | 169,922 | | gpl-2.0 | 112,829 | | bsd | 65,104 | | cc0-1.0 | 13,557 | | lgpl-3.0 | 12,871 | | lgpl-2.1 | 9,960 | | bsd-3-clause | 9,109 | | bsl-1.1 | 8,972 | | epl-1.0 | 7,494 | | gpl-3.0 | 7,476 | | unlicense | 6,265 | | cc-by-3.0 | 4,717 | | cc-by-nc-sa | 4,339 | | mpl-2.0 | 3,847 | | cc-by-4.0 | 2,459 | | cc-by-nc-sa-4.0 | 1,715 | | cc-by-sa-4.0 | 1,701 | | bsd-2-clause | 1,599 | | cc-by-nc-nd-4.0 | 1,222 | | isc | 520 | | wtfpl | 274 | | cc-by-nc-4.0 | 122 | | cc-by-sa | 13 | | cc-by-sa-3.0 | 4 | ## Dataset Structure ### Data Fields | Field | Type | Description | |&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;|&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;&amp;mdash;-| |  code  | string | Content of the source file (UTF-8 encoded) | |  repo_name  | string | Name of the JihuLab repository (format:  username/repo  or  group/subgroup/repo ) | |  path  | string | Path of the file within the repository (relative to repo root) | |  language  | string | Programming language as identified by [go-enry](https://github.com/go-enry/go-enry) | |  license  | string | License of the repository (SPDX identifier or "unknown") | |  size  | int64 | Size of the source file in bytes | ### Data Format - **Format**: Apache Parquet with Zstd compression - **File Structure**: Single consolidated file ( data.parquet ) ### Data Splits All examples are in the train split. There is no validation or test split. ### Example Data Point       code :  package com.example.demo;nnimport org.springframework.boot.*;nimport org.springframework.boot.autoconfigure.*;n... 
,  repo_name :  SmallQ/demo ,  path :  src/main/java/com/example/demo/DemoApplication.java ,  language :  Java ,  license :  unknown ,  size : 400      ## Dataset Creation ### Pipeline Overview The dataset was created through a multi-stage pipeline: 1. **Repository Discovery**: Paginated API requests to JihuLab's GitLab API ( /api/v4/projects ) to enumerate public repositories 2. **Branch Selection**: Using the repository's default branch (typically  main  or  master ) 3. **Repository Downloading**: Downloading repository archives via JihuLab's archive endpoint 4. **Content Extraction**: Extracting and filtering source code files 5. **Parquet Generation**: Writing filtered records to Parquet with Zstd compression ### Language Detection Programming languages are detected using [go-enry](https://github.com/go-enry/go-enry), a Go port of GitHub's Linguist library. Only files classified as **Programming** or **Markup** language types are included (Data and Prose types are excluded). ### License Detection Licenses are detected by: 1. Scanning for license files ( LICENSE ,  LICENSE.txt ,  LICENSE.md ,  COPYING , etc.) 2. Matching license text against known patterns (MIT, Apache 2.0, GPL variants, BSD, Creative Commons, etc.) 3. 
Defaulting to "unknown" if no license can be detected **Blocked Licenses**: The following restrictive licenses are excluded from the dataset: -  cc-by-nd ,  cc-by-nd-2.0 ,  cc-by-nd-3.0 ,  cc-by-nd-4.0  (Creative Commons No-Derivatives) -  commons-clause  -  sspl ,  sspl-1.0  (Server Side Public License) ### File Filtering Extensive filtering is applied to ensure data quality: #### Size Limits | Limit | Value | |&amp;mdash;&amp;mdash;&amp;mdash;-|&amp;mdash;&amp;mdash;&amp;mdash;-| | Max repository ZIP size | 48 MB | | Max single file size | 1 MB | | Max line length | 1,000 characters | #### Excluded Directories - **Configuration**:  .git/ ,  .github/ ,  .gitlab/ ,  .vscode/ ,  .idea/ ,  .vs/ ,  .settings/ ,  .eclipse/ ,  .project/ ,  .metadata/  - **Vendor/Dependencies**:  node_modules/ ,  bower_components/ ,  jspm_packages/ ,  vendor/ ,  third_party/ ,  3rdparty/ ,  external/ ,  packages/ ,  deps/ ,  lib/vendor/ ,  target/dependency/ ,  Pods/  - **Build Output**:  build/ ,  dist/ ,  out/ ,  bin/ ,  target/ ,  release/ ,  debug/ ,  .next/ ,  .nuxt/ ,  _site/ ,  _build/ ,  __pycache__/ ,  .pytest_cache/ ,  cmake-build-* ,  .gradle/ ,  .maven/  #### Excluded Files - **Lock Files**:  package-lock.json ,  yarn.lock ,  pnpm-lock.yaml ,  Gemfile.lock ,  Cargo.lock ,  poetry.lock ,  Pipfile.lock ,  composer.lock ,  go.sum ,  mix.lock  - **Minified Files**: Any file containing  .min.  
in the name - **Binary Files**:  .exe ,  .dll ,  .so ,  .dylib ,  .a ,  .lib ,  .o ,  .obj ,  .jar ,  .war ,  .ear ,  .class ,  .pyc ,  .pyo ,  .wasm ,  .bin ,  .dat ,  .pdf ,  .doc ,  .docx ,  .xls ,  .xlsx ,  .ppt ,  .pptx ,  .zip ,  .tar ,  .gz ,  .bz2 ,  .7z ,  .rar ,  .jpg ,  .jpeg ,  .png ,  .gif ,  .bmp ,  .ico ,  .svg ,  .mp3 ,  .mp4 ,  .avi ,  .mov ,  .wav ,  .flac ,  .ttf ,  .otf ,  .woff ,  .woff2 ,  .eot  - **System Files**:  .DS_Store ,  thumbs.db  #### Content Filtering - **UTF-8 Validation**: Files must be valid UTF-8 encoded text - **Binary Detection**: Files detected as binary by go-enry are excluded - **Generated Files**: Files with generation markers in the first 500 bytes are excluded: -  generated by ,  do not edit ,  auto-generated ,  autogenerated ,  automatically generated ,  code generator ,  generated code ,  this file is generated ,  @generated ,  &lt;auto-generated  - **Empty Files**: Files that are empty or contain only whitespace are excluded - **Long Lines**: Files with any line exceeding 1,000 characters are excluded - **go-enry Filters**: Additional filtering using go-enry s  IsVendor() ,  IsImage() ,  IsDotFile() ,  IsTest() , and  IsGenerated()  functions - **Documentation-only Repos**: Repositories containing only documentation files (no actual code) are skipped ### Source Data All data originates from public repositories hosted on [JihuLab](https://jihulab.com). ## Considerations for Using the Data ### Personal and Sensitive Information The dataset may contain: - Email addresses in code comments or configuration files - API keys or credentials that were accidentally committed - Personal information in comments or documentation Users should exercise caution and implement appropriate filtering when using this data. ### Licensing Information This dataset is a collection of source code from repositories with various licenses. 
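Since each record carries its repository's license, a consumer that only wants permissive code can filter on that field. The allow-list below is a hypothetical illustration, not legal guidance.

```python
# Hypothetical allow-list filter on the dataset's license field; the set of
# acceptable licenses is illustrative only, not legal advice.
PERMISSIVE = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause", "isc", "unlicense"}

def keep_permissive(records, allowed=PERMISSIVE):
    """Drop records whose repository license is not in the allow-list."""
    return [r for r in records if r.get("license") in allowed]
```

Note that "unknown" records are dropped by such a filter, which in this dataset is a substantial fraction of the files.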
Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. The `license` field in each data point indicates the license of the source repository.</description>
<size>1506303268</size>
</item><item>
<title>GitVerse Code Dataset</title>
<category>Dataset</category>
<infohash>8d4375542da8412f19d7d867413e3c29eeb6f4a0</infohash>
<guid>https://academictorrents.com/details/8d4375542da8412f19d7d867413e3c29eeb6f4a0</guid>
<link>https://academictorrents.com/details/8d4375542da8412f19d7d867413e3c29eeb6f4a0</link>
<description># GitVerse Code Dataset ## Dataset Description This dataset was compiled from code repositories hosted on [GitVerse](https://gitverse.ru), a Russian code hosting platform and an alternative to GitHub in the Russian developer community. GitVerse is used by Russian developers, enterprises, and open-source projects, making this dataset particularly valuable for training code models with Russian language understanding and Russian coding conventions. ### Dataset Summary | Statistic | Value | |---|---| | **Total Files** | 2,802,994 | | **Total Repositories** | 9,014 | | **Total Size** | 2 GB (compressed Parquet) | | **Programming Languages** | 416 | | **File Format** | Parquet (single file) | ### Key Features - **Russian code corpus**: Contains code from over 9,000 repositories, many featuring Russian comments, documentation, and variable names - **Diverse language coverage**: Spans 416 programming languages identified by [github-linguist](https://github.com/github-linguist/linguist) ### Languages The dataset includes 416 programming languages.
The top 30 languages by file count: | Rank | Language | File Count | |---|---|---| | 1 | C | 580,713 | | 2 | JavaScript | 275,744 | | 3 | C++ | 197,896 | | 4 | Shell | 166,527 | | 5 | Python | 116,065 | | 6 | Markdown | 112,811 | | 7 | TypeScript | 107,867 | | 8 | Java | 88,429 | | 9 | PHP | 80,341 | | 10 | Makefile | 77,619 | | 11 | XML | 75,320 | | 12 | Go | 69,155 | | 13 | C# | 68,185 | | 14 | Text | 65,677 | | 15 | JSON | 64,253 | | 16 | SVG | 58,107 | | 17 | HTML | 43,261 | | 18 | YAML | 40,178 | | 19 | Unity3D Asset | 33,917 | | 20 | Rust | 32,872 | | 21 | LLVM | 29,819 | | 22 | Unix Assembly | 27,672 | | 23 | Roff | 25,884 | | 24 | CSS | 21,809 | | 25 | TSX | 21,637 | | 26 | reStructuredText | 19,683 | | 27 | Perl | 18,576 | | 28 | Gettext Catalog | 17,071 | | 29 | Diff | 14,225 | | 30 | CMake | 14,132 | ## Dataset Structure ### Data Fields | Field | Type | Description | |---|---|---| | `file_text` | string | The full text content of the file (UTF-8 encoded) | | `language` | string | Programming language as identified by [github-linguist](https://github.com/github-linguist/linguist) | | `file_name` | string | A unique identifier for the file within the dataset | ### Data Format - **Format**: Apache Parquet - **File Structure**: Single file (`data.parquet`) ### Data Splits All examples are in the train split. There is no validation or test split. ### Example Data Point `{"file_text": "Процедура ОбработкаПроведения(Отказ, Режим)\n\t// Нерабочий вариант без ошибок\n...", "language": "1C Enterprise", "file_name": "004_work.code.bsl"}` ## Dataset Creation ### Language Detection Programming languages are detected using [github-linguist](https://github.com/github-linguist/linguist), GitHub's library for language detection and syntax highlighting. ### Source Data All data originates from public repositories hosted on [GitVerse](https://gitverse.ru). ## Considerations for Using the Data ### Personal and Sensitive Information The dataset may contain: - Email addresses in code comments or configuration files - API keys or credentials that were accidentally committed - Personal information in comments or documentation Users should exercise caution and implement appropriate filtering when using this data. ### Licensing Information This dataset is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. Users of this dataset should respect the rights of the authors and use the data responsibly.</description>
<size>2113649354</size>
</item><item>
<title>GitFlic Code Dataset</title>
<category>Dataset</category>
<infohash>ac3849c0ddceb432d9733ed19a000befe88e1e82</infohash>
<guid>https://academictorrents.com/details/ac3849c0ddceb432d9733ed19a000befe88e1e82</guid>
<link>https://academictorrents.com/details/ac3849c0ddceb432d9733ed19a000befe88e1e82</link>
<description># GitFlic Code Dataset ## Dataset Description This dataset was compiled from code repositories hosted on [GitFlic](https://gitflic.ru), the first Russian service for storing and working with source code, based on the Git version control system. GitFlic is widely used by Russian developers, enterprises, and open-source projects, making this dataset particularly valuable for training code models with strong Russian language understanding and Russian coding conventions. ### Dataset Summary | Statistic | Value | |---|---| | **Total Files** | 5,975,978 | | **Total Repositories** | 12,527 | | **Total Size** | 6.44 GB (compressed Parquet) | | **Programming Languages** | 690 | | **File Format** | Parquet with Zstd compression (6 files) | ### Key Features - **Russian code corpus**: Contains code from over 12,000 repositories, many featuring Russian comments, documentation, and variable names - **Diverse language coverage**: Spans 690 programming languages identified by [github-linguist](https://github.com/github-linguist/linguist) - **Deduplicated**: The dataset has been deduplicated and filtered to remove binary files - **Quality filtered**: Filtered to ensure data quality and remove non-code content ### Languages The dataset includes 690 programming languages.
The top 30 languages by file count: | Rank | Language | File Count | |---|---|---| | 1 | C | 739,012 | | 2 | Java | 634,899 | | 3 | C++ | 587,528 | | 4 | JavaScript | 422,832 | | 5 | PHP | 365,105 | | 6 | XML | 291,920 | | 7 | Markdown | 211,574 | | 8 | Shell | 207,178 | | 9 | Python | 206,443 | | 10 | Unity3D Asset | 150,654 | | 11 | SVG | 150,136 | | 12 | TypeScript | 141,886 | | 13 | Text | 139,406 | | 14 | JSON | 126,214 | | 15 | HTML | 122,341 | | 16 | Go | 109,740 | | 17 | YAML | 89,416 | | 18 | Roff | 82,609 | | 19 | C# | 77,520 | | 20 | Makefile | 63,594 | | 21 | LLVM | 55,680 | | 22 | Scala | 53,395 | | 23 | Unix Assembly | 49,909 | | 24 | Rust | 35,553 | | 25 | reStructuredText | 35,023 | | 26 | Objective-C | 34,151 | | 27 | Ruby | 33,366 | | 28 | CMake | 33,030 | | 29 | CSS | 31,664 | | 30 | TSX | 31,397 | ## Dataset Structure ### Data Fields | Field | Type | Description | |---|---|---| | `file_text` | string | Content of the source file (UTF-8 encoded) | | `language` | string | Programming language as identified by [github-linguist](https://github.com/github-linguist/linguist) | | `file_name` | string | A unique identifier for the file within the dataset | ### Data Format - **Format**: Apache Parquet with Zstd compression - **File Structure**: 6 files (`gitflic-00000.parquet` to `gitflic-00005.parquet`) ### Data Splits All examples are in the train split. There is no validation or test split. ### Example Data Point `{"file_text": "package com.example.demo;\n\nimport org.springframework.boot.SpringApplication;\n...", "language": "Java", "file_name": "Application.java"}` ## Dataset Creation ### Language Detection Programming languages are detected using [github-linguist](https://github.com/github-linguist/linguist), GitHub's library for language detection. ### Source Data All data originates from public repositories hosted on [GitFlic](https://gitflic.ru). ## Considerations for Using the Data ### Personal and Sensitive Information The dataset may contain: - Email addresses in code comments or configuration files - API keys or credentials that were accidentally committed - Personal information in comments or documentation Users should exercise caution and implement appropriate filtering when using this data. ### Licensing Information This dataset is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. Users of this dataset should respect the rights of the authors and use the data responsibly.</description>
<size>6911382788</size>
</item><item>
<title>Gitee Code Dataset</title>
<category>Dataset</category>
<infohash>e572ddd8459e96ed50ba40f1ee991734805f2259</infohash>
<guid>https://academictorrents.com/details/e572ddd8459e96ed50ba40f1ee991734805f2259</guid>
<link>https://academictorrents.com/details/e572ddd8459e96ed50ba40f1ee991734805f2259</link>
<description># Gitee Code Dataset ## Dataset Description This dataset was compiled from code repositories hosted on [Gitee](https://gitee.com), China's largest code hosting platform and a leading alternative to GitHub in the Chinese developer community. Gitee is widely used by Chinese developers, enterprises, and open-source projects, making this dataset particularly valuable for training code models with strong Chinese language understanding and Chinese coding conventions. ### Dataset Summary | Statistic | Value | |---|---| | **Total Files** | 819,472,785 | | **Total Repositories** | 3,105,923 | | **Total Size** | 536 GB (compressed Parquet) | | **Programming Languages** | 554 | | **File Format** | Parquet with Zstd compression (468 files) | ### Key Features - **Large-scale Chinese code corpus**: Contains code from over 3 million repositories, many featuring Chinese comments, documentation, and variable names - **Diverse language coverage**: Spans 554 programming languages identified by [go-enry](https://github.com/go-enry/go-enry) (based on GitHub Linguist rules) - **Rich metadata**: Includes repository name, file path, detected language, license information, and file size - **Enterprise and open-source projects**: Includes code from both individual developers and Chinese enterprises - **Quality filtered**: Extensive filtering to remove vendor code, build artifacts, generated files, and low-quality content ### Languages The dataset includes 554 programming languages.
The top 30 languages by file count: | Rank | Language | File Count | |---|---|---| | 1 | Java | 293,439,777 | | 2 | JavaScript | 77,715,425 | | 3 | C | 62,836,721 | | 4 | C++ | 49,134,251 | | 5 | HTML | 46,191,063 | | 6 | Vue | 40,468,646 | | 7 | PHP | 37,132,954 | | 8 | C# | 33,842,369 | | 9 | Python | 25,192,704 | | 10 | CSS | 20,802,464 | | 11 | TypeScript | 20,122,528 | | 12 | Go | 16,176,561 | | 13 | Shell | 8,371,429 | | 14 | Makefile | 6,341,964 | | 15 | Java Server Pages | 6,224,523 | | 16 | TSX | 5,768,542 | | 17 | CMake | 5,581,774 | | 18 | SCSS | 5,291,031 | | 19 | Objective-C | 4,922,736 | | 20 | Less | 4,669,672 | | 21 | Ruby | 3,027,385 | | 22 | Kotlin | 2,986,211 | | 23 | Scala | 2,869,640 | | 24 | Rust | 2,466,122 | | 25 | Starlark | 2,027,514 | | 26 | Dart | 2,010,079 | | 27 | Unix Assembly | 1,900,320 | | 28 | Fluent | 1,882,380 | | 29 | HTML+Razor | 1,863,914 | | 30 | Swift | 1,607,477 | ### Licenses The dataset includes files from repositories with various licenses. Repositories with restrictive licenses (CC-BY-ND variants, Commons Clause, SSPL) were excluded: | License | File Count | |---|---| | apache-2.0 | 273,706,950 | | mit | 201,880,040 | | unknown | 195,868,240 | | agpl-3.0 | 60,181,320 | | bsd | 30,013,190 | | gpl-2.0 | 27,831,530 | | lgpl-3.0 | 11,746,750 | | lgpl-2.1 | 4,807,600 | | bsd-3-clause | 4,442,480 | | cc0-1.0 | 3,144,920 | | gpl-3.0 | 1,631,590 | | unlicense | 1,181,930 | | bsd-2-clause | 1,154,300 | | epl-1.0 | 1,045,470 | | Other licenses | ~5,800,000 | ## Dataset Structure ### Data Fields | Field | Type | Description | |---|---|---| | `code` | string | Content of the source file (UTF-8 encoded) | | `repo_name` | string | Name of the Gitee repository (format: `username/repo`) | | `path` | string | Path of the file within the repository (relative to repo root) | | `language` | string | Programming language as identified by [go-enry](https://github.com/go-enry/go-enry) | | `license` | string | License of the repository (SPDX identifier or "unknown") | | `size` | int64 | Size of the source file in bytes | ### Data Format - **Format**: Apache Parquet with Zstd compression - **File Structure**: 468 files (`gitee_0000.parquet` to `gitee_0467.parquet`) ### Data Splits All examples are in the train split. There is no validation or test split. ### Example Data Point `{"code": "package com.example.demo;\n\nimport org.springframework.boot.SpringApplication;\n...", "repo_name": "username/spring-demo", "path": "src/main/java/com/example/demo/Application.java", "language": "Java", "license": "apache-2.0", "size": 1234}` ## Dataset Creation ### Pipeline Overview The dataset was created through a multi-stage pipeline: 1. **Repository Discovery** 2.
**Branch Selection**: Selecting the main branch for each repository (priority: `master` &gt; `main` &gt; `develop` &gt; `dev` &gt; first branch) 3. **Repository Downloading** 4. **Content Extraction**: Extracting and filtering source code files 5. **Parquet Generation**: Writing filtered records to Parquet shards with Zstd compression ### Language Detection Programming languages are detected using [go-enry](https://github.com/go-enry/go-enry), a Go port of GitHub's Linguist library. Only files classified as **Programming** or **Markup** language types are included (Data and Prose types are excluded). ### License Detection Licenses are detected by: 1. Scanning for license files (`LICENSE`, `LICENSE.txt`, `LICENSE.md`, `COPYING`, etc.) 2. Matching license text against known patterns (MIT, Apache 2.0, GPL variants, BSD, Creative Commons, etc.) 3. Defaulting to "unknown" if no license can be detected **Blocked Licenses**: The following restrictive licenses are excluded from the dataset: - `cc-by-nd`, `cc-by-nd-2.0`, `cc-by-nd-3.0`, `cc-by-nd-4.0` (Creative Commons No-Derivatives) - `commons-clause` - `sspl`, `sspl-1.0` (Server Side Public License) ### File Filtering Extensive filtering is applied to ensure data quality: #### Size Limits | Limit | Value | |---|---| | Max repository ZIP size | 48 MB | | Max single file size | 1 MB | | Max line length | 1,000 characters | #### Excluded Directories - **Configuration**: `.git/`, `.github/`, `.gitlab/`, `.vscode/`, `.idea/`, `.vs/`, `.settings/`, `.eclipse/`, `.project/`, `.metadata/` - **Vendor/Dependencies**: `node_modules/`, `bower_components/`, `jspm_packages/`, `vendor/`, `third_party/`, `3rdparty/`, `external/`, `packages/`, `deps/`, `lib/vendor/`, `target/dependency/`, `Pods/` - **Build Output**: `build/`, `dist/`, `out/`, `bin/`, `target/`, `release/`, `debug/`, `.next/`, `.nuxt/`, `_site/`, `_build/`, `__pycache__/`, `.pytest_cache/`, `cmake-build-*`, `.gradle/`, `.maven/` #### Excluded Files - **Lock Files**: `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`, `Gemfile.lock`, `Cargo.lock`, `poetry.lock`, `Pipfile.lock`, `composer.lock`, `go.sum`, `mix.lock` - **Minified Files**: Any file containing `.min.` in the name - **Binary Files**: `.exe`, `.dll`, `.so`, `.dylib`, `.a`, `.lib`, `.o`, `.obj`, `.jar`, `.war`, `.ear`, `.class`, `.pyc`, `.pyo`, `.wasm`, `.bin`, `.dat`, `.pdf`, `.doc`, `.docx`, `.xls`, `.xlsx`, `.ppt`, `.pptx`, `.zip`, `.tar`, `.gz`, `.bz2`, `.7z`, `.rar`, `.jpg`, `.jpeg`, `.png`, `.gif`, `.bmp`, `.ico`, `.svg`, `.mp3`, `.mp4`, `.avi`, `.mov`, `.wav`, `.flac`, `.ttf`, `.otf`, `.woff`, `.woff2`, `.eot` - **System Files**: `.DS_Store`, `thumbs.db` #### Content Filtering - **UTF-8 Validation**: Files must be valid UTF-8 encoded text - **Binary Detection**: Files detected as binary by go-enry are excluded - **Generated Files**: Files with generation markers in the first 500 bytes are excluded: - `generated by`, `do not edit`, `auto-generated`, `autogenerated`, `automatically generated`, `code generator`, `generated code`, `this file is generated`, `@generated`, `&lt;auto-generated` - **Empty Files**: Files that are empty or contain only whitespace are excluded - **Long Lines**: Files with any line exceeding 1,000 characters are excluded - **go-enry Filters**: Additional filtering using go-enry's `IsVendor()`, `IsImage()`, `IsDotFile()`, `IsTest()`, and `IsGenerated()` functions - **Documentation-only Repos**: Repositories containing only documentation files (no actual code) are skipped ### Source Data All data originates from public repositories hosted on [Gitee](https://gitee.com).
## Considerations for Using the Data ### Personal and Sensitive Information The dataset may contain: - Email addresses in code comments or configuration files - API keys or credentials that were accidentally committed - Personal information in comments or documentation Users should exercise caution and implement appropriate filtering when using this data. ### Licensing Information This dataset is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. The `license` field in each data point indicates the license of the source repository.</description>
<size>574748274837</size>
</item><item>
<title>GitCode Code Dataset</title>
<category>Dataset</category>
<infohash>c1dd455d6f9bbeb7f1e88de92a952df5e04f64c4</infohash>
<guid>https://academictorrents.com/details/c1dd455d6f9bbeb7f1e88de92a952df5e04f64c4</guid>
<link>https://academictorrents.com/details/c1dd455d6f9bbeb7f1e88de92a952df5e04f64c4</link>
<description># GitCode Code Dataset ## Dataset Description This dataset was compiled from code repositories hosted on [GitCode](https://gitcode.com), a code hosting platform in China backed by CSDN (China Software Developer Network). GitCode serves as a domestic alternative to GitHub, widely used by Chinese developers, students, and enterprises for hosting open-source projects and educational resources, making this dataset particularly valuable for training code models with Chinese language understanding and Chinese coding conventions. ### Dataset Summary | Statistic | Value | |---|---| | **Total Files** | 48,142,567 | | **Total Repositories** | 85,632 | | **Total Size** | 40 GB (compressed Parquet) | | **Programming Languages** | 537 | | **File Format** | Parquet with Zstd compression (34 files) | ### Key Features - **Chinese code corpus**: Contains code from over 85,000 repositories, many featuring Chinese comments, documentation, and variable names - **Diverse language coverage**: Spans 537 programming languages identified by [go-enry](https://github.com/go-enry/go-enry) (based on GitHub Linguist rules) - **Rich metadata**: Includes repository name, file path, detected language, license information, and file size - **Open-source and educational projects**: Includes code from individual developers, students, and Chinese enterprises - **Quality filtered**: Extensive filtering to remove vendor code, build artifacts, generated files, and low-quality content ### Languages The dataset includes 537 programming languages.
The top 30 languages by file count: | Rank | Language | File Count | |---|---|---| | 1 | C++ | 9,513,619 | | 2 | C | 8,220,317 | | 3 | Java | 5,362,924 | | 4 | Python | 3,428,302 | | 5 | TypeScript | 3,166,959 | | 6 | JavaScript | 2,540,280 | | 7 | HTML | 1,578,824 | | 8 | Kotlin | 1,413,651 | | 9 | C# | 1,232,638 | | 10 | Go | 1,159,708 | | 11 | Rust | 812,959 | | 12 | Dart | 767,731 | | 13 | TSX | 749,355 | | 14 | PHP | 663,953 | | 15 | Shell | 629,436 | | 16 | Vue | 563,754 | | 17 | Makefile | 471,588 | | 18 | CMake | 460,428 | | 19 | CSS | 381,628 | | 20 | Ruby | 350,213 | | 21 | Objective-C | 347,251 | | 22 | LLVM | 297,591 | | 23 | Unix Assembly | 291,826 | | 24 | Swift | 206,725 | | 25 | Objective-C++ | 160,526 | | 26 | Scala | 157,367 | | 27 | QML | 157,088 | | 28 | Lua | 149,114 | | 29 | SCSS | 141,661 | | 30 | GLSL | 129,124 | ### Licenses The dataset includes files from repositories with various licenses. Repositories with restrictive licenses (CC-BY-ND variants, Commons Clause, SSPL) were excluded: | License | File Count | |---|---| | unknown | 23,567,463 | | apache-2.0 | 8,722,445 | | mit | 7,743,613 | | gpl-2.0 | 3,528,526 | | agpl-3.0 | 2,300,580 | | lgpl | 1,013,654 | | bsd-3-clause | 528,980 | | gpl-3.0 | 305,332 | | public-domain | 163,493 | | bsd-2-clause | 94,426 | | bsd | 69,967 | | isc | 36,117 | | unlicense | 28,411 | | cc0-1.0 | 26,799 | | mpl-2.0 | 9,459 | | Other licenses | ~5,000 | ## Dataset Structure ### Data Fields | Field | Type | Description | |---|---|---| | `code` | string | Content of the source file (UTF-8 encoded) | | `repo_name` | string | Name of the GitCode repository (format: `username/repo`) | | `path` | string | Path of the file within the repository (relative to repo root) | | `language` | string | Programming language as identified by [go-enry](https://github.com/go-enry/go-enry) | | `license` | string | License of the repository (SPDX identifier or "unknown") | | `size` | int64 | Size of the source file in bytes | ### Data Format - **Format**: Apache Parquet with Zstd compression - **File Structure**: 34 files (`gitcode_0000.parquet` to `gitcode_0033.parquet`) ### Data Splits All examples are in the train split. There is no validation or test split. ### Example Data Point</description>
<size>42220690870</size>
</item><item>
<title>Russian QnA 333K</title>
<category>Dataset</category>
<infohash>8282e7191ddec974eb94c56ce83d424cc8184204</infohash>
<guid>https://academictorrents.com/details/8282e7191ddec974eb94c56ce83d424cc8184204</guid>
<link>https://academictorrents.com/details/8282e7191ddec974eb94c56ce83d424cc8184204</link>
<description># Dataset Card for Russian QnA ### Dataset Summary This dataset contains a collection of questions and answers in Russian. The dataset includes questions across various categories with corresponding answers, ratings, and metadata. ### Languages The dataset content is primarily in Russian: - Russian (ru) ## Dataset Structure ### Data Files - Single file containing all Q&amp;A records: `data.parquet` ### Data Fields Each record contains the following fields: - `question_id`: Unique identifier for the question. - `question_title`: Title/subject of the question. - `question_description`: Extended description or body of the question. - `question_images`: Array of image URLs associated with the question. - `category`: Category/topic area of the question (e.g., "здоровье и медицина", i.e. "health and medicine"). - `tags`: Array of tags associated with the question. - `question_rating`: Rating/score of the question. - `answers`: Array of answer objects, each containing: - `answer_text`: Text content of the answer - `answer_images`: Array of image URLs in the answer - `answer_rating`: Rating/score of the answer ### Data Splits The dataset contains a single split with all Q&amp;A records: | Split | Description | Number of Examples | | :--- | :--- | ---: | | `train` | All question-answer pairs | 333,029 |</description>
<size>225873026</size>
</item><item>
<title>enwiki-20260101-pages-articles-multistream-index.txt.bz2</title>
<category>Dataset</category>
<infohash>54315dcb073c43c81fecdcdc941b7100c2169259</infohash>
<guid>https://academictorrents.com/details/54315dcb073c43c81fecdcdc941b7100c2169259</guid>
<link>https://academictorrents.com/details/54315dcb073c43c81fecdcdc941b7100c2169259</link>
<description>English Wikipedia Multistream Index 2026-01-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download Corresponding multistream file: https://academictorrents.com/details/e7d78d128db80266830e64c0142a67d0c5413ced</description>
<size>277651040</size>
</item><item>
<title>enwiki-20260101-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>e7d78d128db80266830e64c0142a67d0c5413ced</infohash>
<guid>https://academictorrents.com/details/e7d78d128db80266830e64c0142a67d0c5413ced</guid>
<link>https://academictorrents.com/details/e7d78d128db80266830e64c0142a67d0c5413ced</link>
<description>English Wikipedia Multistream 2026-01-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download Corresponding index file: https://academictorrents.com/details/54315dcb073c43c81fecdcdc941b7100c2169259</description>
<size>25897924587</size>
</item><item>
<title>Stack Exchange Data Dump (2025-12-31)</title>
<category>Dataset</category>
<infohash>0d1d597fa7809f0e85f127b5eb3088219ddbad39</infohash>
<guid>https://academictorrents.com/details/0d1d597fa7809f0e85f127b5eb3088219ddbad39</guid>
<link>https://academictorrents.com/details/0d1d597fa7809f0e85f127b5eb3088219ddbad39</link>
<description>This data dump is sourced from the various sites in the Stack Exchange network of Q&amp;A sites. This dump contains data up to and including 2025-12-31. The exact license for each piece of content is embedded in each entry. For license date ranges, see the root-level license.txt, or https://stackoverflow.com/help/licensing. For the schema, see the sede-and-data-dump-schema.md file within each .7z archive. Note that the 2025-06-30 data dump was identified to contain watermarks with bogus data entries. At the time of upload, no watermarks had been identified in the 2025-12-31 data dump. See https://github.com/LunarWatcher/se-data-dump-transformer/blob/master/docs/meta/Known%20watermarks.md for a community-compiled list of known watermarks, and updates on whether this data dump contains fingerprinting or not. This torrent has also been archived at https://archive.org/details/stackexchange_20251231</description>
<size>98336346062</size>
</item><item>
<title>314T Pi Part 4</title>
<category>Dataset</category>
<infohash>5a84284bc2773f8d64e2042a0731f5a73ca342ea</infohash>
<guid>https://academictorrents.com/details/5a84284bc2773f8d64e2042a0731f5a73ca342ea</guid>
<link>https://academictorrents.com/details/5a84284bc2773f8d64e2042a0731f5a73ca342ea</link>
<description/>
<size>33052632018944</size>
</item><item>
<title>314T Pi Part 3</title>
<category>Dataset</category>
<infohash>cd004a3444233c52395ed100d1cdec49b35d4e5a</infohash>
<guid>https://academictorrents.com/details/cd004a3444233c52395ed100d1cdec49b35d4e5a</guid>
<link>https://academictorrents.com/details/cd004a3444233c52395ed100d1cdec49b35d4e5a</link>
<description/>
<size>33052632018944</size>
</item><item>
<title>314T Pi Part 2</title>
<category>Dataset</category>
<infohash>30569eac2a9ac9580856a31bd314384b0fccaa73</infohash>
<guid>https://academictorrents.com/details/30569eac2a9ac9580856a31bd314384b0fccaa73</guid>
<link>https://academictorrents.com/details/30569eac2a9ac9580856a31bd314384b0fccaa73</link>
<description/>
<size>33052632018944</size>
</item><item>
<title>314T Pi Part 1</title>
<category>Dataset</category>
<infohash>9dcad1ceda73694af896610c4279f2a642a95414</infohash>
<guid>https://academictorrents.com/details/9dcad1ceda73694af896610c4279f2a642a95414</guid>
<link>https://academictorrents.com/details/9dcad1ceda73694af896610c4279f2a642a95414</link>
<description/>
<size>33052632018944</size>
</item><item>
<title>Wikipedia African languages 2026-01-01</title>
<category>Dataset</category>
<infohash>ab209f47f91d718ade7910121bc875001379be01</infohash>
<guid>https://academictorrents.com/details/ab209f47f91d718ade7910121bc875001379be01</guid>
<link>https://academictorrents.com/details/ab209f47f91d718ade7910121bc875001379be01</link>
<description>Wikipedia database dumps of African language wikis of 2000 articles or more. Wikipedia Multistream 2026-01-01. These 29 languages are included: Afrikaans, Amharic, Arabic, Dagbani, Egyptian Arabic, Fon, Fula, Ganda, Ghanaian Pidgin, Hausa, Igbo, Kabyle, Kinyarwanda, Lingala, Malagasy, Moroccan Arabic, Northern Sotho, Shona, Somali, Southern Dagaare, Standard Moroccan Tamazight, Swahili, Tachelhit, Tswana, Tumbuka, Twi, Xhosa, Yoruba, Zulu.</description>
<size>3052544534</size>
</item><item>
<title>Reddit comments/submissions 2025-11</title>
<category>Dataset</category>
<infohash>2d056b22743718ac81915f25b094b6226668663f</infohash>
<guid>https://academictorrents.com/details/2d056b22743718ac81915f25b094b6226668663f</guid>
<link>https://academictorrents.com/details/2d056b22743718ac81915f25b094b6226668663f</link>
<description>Reddit comments and submissions from 2025-11 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>54516332195</size>
</item><item>
<title>Main dataset for PatchSeeker paper</title>
<category>Dataset</category>
<infohash>bbad518053332f3578794b476405555a58232d77</infohash>
<guid>https://academictorrents.com/details/bbad518053332f3578794b476405555a58232d77</guid>
<link>https://academictorrents.com/details/bbad518053332f3578794b476405555a58232d77</link>
<description/>
<size>310896916665</size>
</item><item>
<title>enwiki-20251201-pages-articles-multistream-index.txt.bz2</title>
<category>Dataset</category>
<infohash>963d0890784aacfe7d2f9c3a703cecfcfbbe0930</infohash>
<guid>https://academictorrents.com/details/963d0890784aacfe7d2f9c3a703cecfcfbbe0930</guid>
<link>https://academictorrents.com/details/963d0890784aacfe7d2f9c3a703cecfcfbbe0930</link>
<description>English Wikipedia Multistream Index 2025-12-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download Corresponding multistream file: https://academictorrents.com/details/3d6a771c09c048bd7e7ee470391b739bfd1b39c1</description>
<size>276690635</size>
</item><item>
<title>enwiki-20251201-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>3d6a771c09c048bd7e7ee470391b739bfd1b39c1</infohash>
<guid>https://academictorrents.com/details/3d6a771c09c048bd7e7ee470391b739bfd1b39c1</guid>
<link>https://academictorrents.com/details/3d6a771c09c048bd7e7ee470391b739bfd1b39c1</link>
<description>English Wikipedia Multistream 2025-12-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download Corresponding index file: https://academictorrents.com/details/963d0890784aacfe7d2f9c3a703cecfcfbbe0930</description>
<size>25803484989</size>
</item><item>
<title>Wikipedia European languages 2025-12-01</title>
<category>Dataset</category>
<infohash>6dce84d7905e4e632cbcaadbccce72fe83511528</infohash>
<guid>https://academictorrents.com/details/6dce84d7905e4e632cbcaadbccce72fe83511528</guid>
<link>https://academictorrents.com/details/6dce84d7905e4e632cbcaadbccce72fe83511528</link>
<description>Wikipedia database dumps of European language wikis of 10k articles or more. enwiki excluded. Wikipedia Multistream 2025-12-01. These 68 languages are included: Albanian, Alemannic, Aragonese, Asturian, Basque, Bavarian, Belarusian, Bosnian, Breton, Bulgarian, Catalan, Croatian, Czech, Danish, Dutch, Emilian-Romagnol, Esperanto, Estonian, Faroese, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Ladin, Latin, Latvian, Ligurian, Limburgish, Lithuanian, Lombard, Low German, Luxembourgish, Macedonian, Maltese, Neapolitan, North Frisian, Norwegian, Nynorsk, Occitan, Piedmontese, Polish, Portuguese, Romanian, Romansh, Rusyn, Samogitian, Scots, Scottish Gaelic, Serbian, Serbo-Croatian, Sicilian, Silesian, Slovak, Slovenian, Spanish, Swedish, Ukrainian, Upper Sorbian, Venetian, Walloon, Welsh, West Frisian, Yiddish.</description>
<size>52127652208</size>
</item><item>
<title>frwiki-20251001-pages-articles-multistream-index.txt.bz2</title>
<category>Dataset</category>
<infohash>2a17388783d71f69087a8e588c037eb92b2bdb5b</infohash>
<guid>https://academictorrents.com/details/2a17388783d71f69087a8e588c037eb92b2bdb5b</guid>
<link>https://academictorrents.com/details/2a17388783d71f69087a8e588c037eb92b2bdb5b</link>
<description>Il s'agit d'un dump de l'index de Wikipédia en langue française, pris le 01/10/2025, au format multiflux, utile pour vérifier l'intégrité du dump de Wikipédia fait à la même date. Pour plus d'informations, veuillez consulter [1]. Ce contenu est sous licence Creative Commons Attribution Partage dans les Mêmes Conditions version 4.0 (CC-BY-SA 4.0). Plus d'informations sont disponibles sur [2]. L'autorisation d'utilisation et de distribution, y compris via BitTorrent, est accordée conformément aux conditions d'utilisation de la Fondation Wikimedia, disponibles sur [3]. Des informations supplémentaires sur les droits d'auteur sont disponibles à l'adresse [4] qui peuvent également être informatives. This is a dump of the index of the French language Wikipedia, taken 2025-10-01, in multistream format, useful for verifying the integrity of the Wikipedia dump taken on the same date. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute - including via BitTorrent - is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>62365151</size>
</item><item>
<title>frwiki-20251001-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>41c5843a5190f19d3471063279f0077209495b49</infohash>
<guid>https://academictorrents.com/details/41c5843a5190f19d3471063279f0077209495b49</guid>
<link>https://academictorrents.com/details/41c5843a5190f19d3471063279f0077209495b49</link>
<description>Il s'agit d'un dump de la Wikipédia en langue française, pris le 01/10/2025, au format multiflux. Pour plus d'informations, veuillez consulter [1]. Ce contenu est sous licence Creative Commons Attribution Partage dans les Mêmes Conditions version 4.0 (CC-BY-SA 4.0). Plus d'informations sont disponibles sur [2]. L'autorisation d'utilisation et de distribution, y compris via BitTorrent, est accordée conformément aux conditions d'utilisation de la Fondation Wikimedia, disponibles sur [3]. Des informations supplémentaires sur les droits d'auteur sont disponibles à l'adresse [4] qui peuvent également être informatives. This is a dump of the French language Wikipedia, taken 2025-10-01, in multistream format. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute - including via BitTorrent - is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>6931829432</size>
</item><item>
<title>frwiki-20251101-pages-articles-multistream-index.txt.bz2</title>
<category>Dataset</category>
<infohash>ca16180467b0470653592d9b26153a9fe1d23591</infohash>
<guid>https://academictorrents.com/details/ca16180467b0470653592d9b26153a9fe1d23591</guid>
<link>https://academictorrents.com/details/ca16180467b0470653592d9b26153a9fe1d23591</link>
<description>Il s'agit d'un dump de l'index de Wikipédia en langue française, pris le 01/11/2025, au format multiflux, utile pour vérifier l'intégrité du dump de Wikipédia fait à la même date. Pour plus d'informations, veuillez consulter [1]. Ce contenu est sous licence Creative Commons Attribution Partage dans les Mêmes Conditions version 4.0 (CC-BY-SA 4.0). Plus d'informations sont disponibles sur [2]. L'autorisation d'utilisation et de distribution, y compris via BitTorrent, est accordée conformément aux conditions d'utilisation de la Fondation Wikimedia, disponibles sur [3]. Des informations supplémentaires sur les droits d'auteur sont disponibles à l'adresse [4] qui peuvent également être informatives. This is a dump of the index of the French language Wikipedia, taken 2025-11-01, in multistream format, useful for verifying the integrity of the Wikipedia dump taken on the same date. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute - including via BitTorrent - is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>62529375</size>
</item><item>
<title>frwiki-20251101-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>8c469bfba755ff3e8a8a9017ede0aa543500197a</infohash>
<guid>https://academictorrents.com/details/8c469bfba755ff3e8a8a9017ede0aa543500197a</guid>
<link>https://academictorrents.com/details/8c469bfba755ff3e8a8a9017ede0aa543500197a</link>
<description>Il s'agit d'un dump de Wikipédia en langue française, pris le 01/11/2025, au format multiflux. Pour plus d'informations, veuillez consulter [1]. Ce contenu est sous licence Creative Commons Attribution Partage dans les Mêmes Conditions version 4.0 (CC-BY-SA 4.0). Plus d'informations sont disponibles sur [2]. L'autorisation d'utilisation et de distribution, y compris via BitTorrent, est accordée conformément aux conditions d'utilisation de la Fondation Wikimedia, disponibles sur [3]. Des informations supplémentaires sur les droits d'auteur sont disponibles à l'adresse [4] qui peuvent également être informatives. This is a dump of the French language Wikipedia, taken 2025-11-01, in multistream format. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute - including via BitTorrent - is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>6963574312</size>
</item><item>
<title>English Wikipedia dump from May 2022 (wikipedia_en_all_maxi_2022-05, pre-ChatGPT)</title>
<category>Dataset</category>
<infohash>ab13148ab9b64f11c9548fb87bf05f8ce64cb15a</infohash>
<guid>https://academictorrents.com/details/ab13148ab9b64f11c9548fb87bf05f8ce64cb15a</guid>
<link>https://academictorrents.com/details/ab13148ab9b64f11c9548fb87bf05f8ce64cb15a</link>
<description>English-language Wikipedia dump published in May 2022 by the Kiwix Project ( https://kiwix.org/ ). It contains all articles, complete with extra data files (images and audio present in the articles). This snapshot was taken before the widespread adoption of LLM technologies, and may serve as a point of reference in the future. It is also suitable for running private/local/offline Wikipedia mirrors. To read ZIM files in a browser, launch a kiwix-serve server ( https://wiki.kiwix.org/wiki/Kiwix-serve ). Alternatively, pick a suitable ZIM reader app for your platform ( https://kiwix.org/en/applications/ ). Documentation for the file format is at https://wiki.openzim.org/wiki/ZIM_file_format .</description>
<size>95199730590</size>
</item><item>
<title>Wikimedia Enterprise HTML Dump 2025-03-20</title>
<category>Dataset</category>
<infohash>088035ffe42ce7276645c4217bab9296324b5474</infohash>
<guid>https://academictorrents.com/details/088035ffe42ce7276645c4217bab9296324b5474</guid>
<link>https://academictorrents.com/details/088035ffe42ce7276645c4217bab9296324b5474</link>
<description>Formerly a Wikimedia Enterprise exclusive (now public): a static HTML dump of all Wikimedia wikis, including Wikipedia. Effective: 2025-03-20 See also: * https://meta.wikimedia.org/wiki/Data_dump_torrents#Wikipedia_Enterprise_Static_HTML * https://dumps.wikimedia.org/other/enterprise_html/ * https://dumps.wikimedia.org/</description>
<size>983380932428</size>
</item><item>
<title>Reddit comments/submissions 2025-10</title>
<category>Dataset</category>
<infohash>cb4fa22ea76ea0a2bb38885b27323c94a5d9d16c</infohash>
<guid>https://academictorrents.com/details/cb4fa22ea76ea0a2bb38885b27323c94a5d9d16c</guid>
<link>https://academictorrents.com/details/cb4fa22ea76ea0a2bb38885b27323c94a5d9d16c</link>
<description>Reddit comments and submissions from 2025-10 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>56113055913</size>
</item><item>
<title>enwiki-20251101-pages-articles-multistream-index.txt.bz2</title>
<category>Dataset</category>
<infohash>4fa9b17b68b06e5bff43001579d0a7f0be29f399</infohash>
<guid>https://academictorrents.com/details/4fa9b17b68b06e5bff43001579d0a7f0be29f399</guid>
<link>https://academictorrents.com/details/4fa9b17b68b06e5bff43001579d0a7f0be29f399</link>
<description>English Wikipedia Multistream Index 2025-11-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download Corresponding multistream file: https://academictorrents.com/details/1383d067a266af4a163f591531c9b64af458b107</description>
<size>275924420</size>
</item><item>
<title>enwiki-20251101-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>1383d067a266af4a163f591531c9b64af458b107</infohash>
<guid>https://academictorrents.com/details/1383d067a266af4a163f591531c9b64af458b107</guid>
<link>https://academictorrents.com/details/1383d067a266af4a163f591531c9b64af458b107</link>
<description>English Wikipedia Multistream 2025-11-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download Corresponding index file: https://academictorrents.com/details/4fa9b17b68b06e5bff43001579d0a7f0be29f399</description>
<size>25714481462</size>
</item><item>
<title>Wikipedia Asian languages 2025-11-01</title>
<category>Dataset</category>
<infohash>515a4ffb36640e60998e05d65a6f05aea9a86c87</infohash>
<guid>https://academictorrents.com/details/515a4ffb36640e60998e05d65a6f05aea9a86c87</guid>
<link>https://academictorrents.com/details/515a4ffb36640e60998e05d65a6f05aea9a86c87</link>
<description>Wikipedia database dumps of Asian language wikis of 10k articles or more. Wikipedia Multistream 2025-11-01. These 85 languages are included: Acehnese, Armenian, Assamese, Azerbaijani, Balinese, Bangla, Banjar, Banyumasan, Bashkir, Bishnupriya, Buginese, Burmese, Cantonese, Cebuano, Central Bikol, Central Kurdish, Chechen, Chinese, Chuvash, Classical Chinese, Dimli, Eastern Mari, Georgian, Gilaki, Gorontalo, Gujarati, Hakka, Hebrew, Hindi, Iloko, Indonesian, Japanese, Javanese, Kannada, Kara-Kalpak, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Maithili, Malay, Malayalam, Manipuri, Marathi, Mazanderani, Minangkabau, Mindong, Mingrelian, Minnan, Mongolian, Nepali, Newari, Odia, Ossetic, Pampangan, Pashto, Persian, Punjabi, Russian, Sanskrit, Santali, Saraiki, Shan, Sindhi, Sinhala, South Azerbaijani, Sundanese, Tagalog, Tajik, Talysh, Tamil, Tatar, Telugu, Thai, Turkish, Urdu, Uzbek, Vietnamese, Waray, Western Armenian, Western Mari, Western Punjabi, Wu, Yakut.</description>
<size>30672631789</size>
</item><item>
<title>Computation Structures (MIT 6.004) - Video lectures 2017</title>
<category>Course</category>
<infohash>87292517b2cd3f16e1ac142112b6a99fe4dfe712</infohash>
<guid>https://academictorrents.com/details/87292517b2cd3f16e1ac142112b6a99fe4dfe712</guid>
<link>https://academictorrents.com/details/87292517b2cd3f16e1ac142112b6a99fe4dfe712</link>
<description>This course introduces the architecture of digital systems, emphasizing structural principles common to a wide range of technologies. Topics include multilevel implementation strategies; definition of new primitives (e.g., gates, instructions, procedures, processes) and their mechanization using lower-level elements; analysis of potential concurrency; precedence constraints and performance measures; pipelined and multidimensional systems; instruction set design issues; and architectural support for contemporary software structures. Main course URL: https://ocw.mit.edu/6-004S17 This content is hosted at the Internet Archive at https://archive.org/details/MIT6.004S17 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/MIT6.004S17/MIT6.004S17_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file MIT6.004S17_meta.xml contains metadata about this torrent's contents.</description>
<size>8027897856</size>
</item><item>
<title>Reddit comments/submissions 2025-09</title>
<category>Dataset</category>
<infohash>a92ce24b4180e4aa9295353f4d26f050031e3058</infohash>
<guid>https://academictorrents.com/details/a92ce24b4180e4aa9295353f4d26f050031e3058</guid>
<link>https://academictorrents.com/details/a92ce24b4180e4aa9295353f4d26f050031e3058</link>
<description>Reddit comments and submissions from 2025-09 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>55663729678</size>
</item><item>
<title>enwiki-20251001-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>a514e8e39fc4cb3c20f0e7251fd9a08a5088d42b</infohash>
<guid>https://academictorrents.com/details/a514e8e39fc4cb3c20f0e7251fd9a08a5088d42b</guid>
<link>https://academictorrents.com/details/a514e8e39fc4cb3c20f0e7251fd9a08a5088d42b</link>
<description>English Wikipedia Multistream 2025-10-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download Corresponding index file: https://academictorrents.com/details/c1063d7eeaa77bcf0bfada744b840e32c8848090</description>
<size>25601050952</size>
</item><item>
<title>enwiki-20251001-pages-articles-multistream-index.txt.bz2</title>
<category>Dataset</category>
<infohash>c1063d7eeaa77bcf0bfada744b840e32c8848090</infohash>
<guid>https://academictorrents.com/details/c1063d7eeaa77bcf0bfada744b840e32c8848090</guid>
<link>https://academictorrents.com/details/c1063d7eeaa77bcf0bfada744b840e32c8848090</link>
<description>English Wikipedia Multistream Index 2025-10-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download Corresponding multistream file: https://academictorrents.com/details/a514e8e39fc4cb3c20f0e7251fd9a08a5088d42b</description>
<size>275068786</size>
</item><item>
<title>No Answer Needed: Predicting LLM Answer Accuracy from Question-Only Linear Probes</title>
<category>Dataset</category>
<infohash>011a9b941bd460d219e563eb5eccc470aadd8f20</infohash>
<guid>https://academictorrents.com/details/011a9b941bd460d219e563eb5eccc470aadd8f20</guid>
<link>https://academictorrents.com/details/011a9b941bd460d219e563eb5eccc470aadd8f20</link>
<description>Do large language models (LLMs) anticipate when they will answer correctly? To study this, we extract activations after a question is read but before any tokens are generated, and train linear probes to predict whether the model's forthcoming answer will be correct. Across three open-source model families ranging from 7 to 70 billion parameters, projections on this "in-advance correctness direction" trained on generic trivia questions predict success in distribution and on diverse out-of-distribution knowledge datasets, outperforming black-box baselines and verbalised predicted confidence. Predictive power saturates in intermediate layers, suggesting that self-assessment emerges mid-computation. Notably, generalisation falters on questions requiring mathematical reasoning. Moreover, for models responding "I don't know", doing so strongly correlates with the probe score, indicating that the same direction also captures confidence. By complementing previous results on truthfulness and other behaviours obtained with probes and sparse auto-encoders, our work contributes essential findings to elucidate LLM internals.</description>
<size>615429611520</size>
</item><item>
<title>Stack Exchange Data Dump (2025-09-30)</title>
<category>Dataset</category>
<infohash>5fc984dcd7cf67203c0cb63ad814a17c3d486322</infohash>
<guid>https://academictorrents.com/details/5fc984dcd7cf67203c0cb63ad814a17c3d486322</guid>
<link>https://academictorrents.com/details/5fc984dcd7cf67203c0cb63ad814a17c3d486322</link>
<description>This data dump is sourced from the various sites in the Stack Exchange network of Q&amp;A sites. This dump contains data up to and including 2025-09-30. The exact licenses for each bit of content are embedded in each entry. For license date ranges, see the root-level license.txt, or https://stackoverflow.com/help/licensing. For the schema, see the sede-and-data-dump-schema.md file within each .7z Note that the 2025-06-30 data dump was identified to contain watermarks with bogus data entries. At the time of upload, no watermarks had been identified in the 2025-09-30 data dump, but watermarks are considered likely to be present. See https://github.com/LunarWatcher/se-data-dump-transformer/blob/master/docs/meta/Known%20watermarks.md for a community-compiled list of known watermarks, and for updates on whether this data dump contains fingerprinting or not. This torrent has also been archived at https://archive.org/details/stackexchange_20250930</description>
<size>97365409817</size>
</item><item>
<title>Wikipedia African languages 2025-10-01</title>
<category>Dataset</category>
<infohash>ef843283d5e9598f933a1086834b1c8b57fcd3a9</infohash>
<guid>https://academictorrents.com/details/ef843283d5e9598f933a1086834b1c8b57fcd3a9</guid>
<link>https://academictorrents.com/details/ef843283d5e9598f933a1086834b1c8b57fcd3a9</link>
<description>Wikipedia database dumps of African language wikis of 2k articles or more. Wikipedia Multistream 2025-10-01. These 29 languages are included: Afrikaans, Amharic, Arabic, Dagbani, Egyptian Arabic, Fon, Fula, Ganda, Ghanaian Pidgin, Hausa, Igbo, Kabyle, Kinyarwanda, Lingala, Malagasy, Moroccan Arabic, Northern Sotho, Shona, Somali, Southern Dagaare, Standard Moroccan Tamazight, Swahili, Tachelhit, Tswana, Tumbuka, Twi, Xhosa, Yoruba, Zulu.</description>
<size>2965791521</size>
</item><item>
<title>enwiki-20250901-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>d4d8a81c8b5ba37b8960f38e250b34a970ca78fa</infohash>
<guid>https://academictorrents.com/details/d4d8a81c8b5ba37b8960f38e250b34a970ca78fa</guid>
<link>https://academictorrents.com/details/d4d8a81c8b5ba37b8960f38e250b34a970ca78fa</link>
<description>English Wikipedia Multistream 2025-09-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>25481731365</size>
</item><item>
<title>Reddit comments/submissions 2025-08</title>
<category>Dataset</category>
<infohash>c71a97c1f7f676c56963c4e15a81f20afb0109be</infohash>
<guid>https://academictorrents.com/details/c71a97c1f7f676c56963c4e15a81f20afb0109be</guid>
<link>https://academictorrents.com/details/c71a97c1f7f676c56963c4e15a81f20afb0109be</link>
<description>Reddit comments and submissions from 2025-08 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>58018269156</size>
</item><item>
<title>Wikipedia European languages 2025-09-01</title>
<category>Dataset</category>
<infohash>59b3c5bcda9ece205eeb46a63874a414791df604</infohash>
<guid>https://academictorrents.com/details/59b3c5bcda9ece205eeb46a63874a414791df604</guid>
<link>https://academictorrents.com/details/59b3c5bcda9ece205eeb46a63874a414791df604</link>
<description>Wikipedia database dumps of European language wikis of 10k articles or more. enwiki excluded. Wikipedia Multistream 2025-09-01. These 61 languages are included: Albanian, Alemannic, Asturian, Basque, Bavarian, Belarusian, Bosnian, Breton, Bulgarian, Catalan, Croatian, Czech, Danish, Dutch, Esperanto, Estonian, Faroese, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Ladin, Latin, Latvian, Ligurian, Limburgish, Lithuanian, Lombard, Low German, Luxembourgish, Macedonian, Maltese, Neapolitan, Northern Frisian, Norwegian, Nynorsk, Occitan, Piedmontese, Polish, Portuguese, Romanian, Romansh, Scots, Scottish Gaelic, Serbian, Serbo-Croatian, Sicilian, Slovak, Slovenian, Spanish, Swedish, Ukrainian, Upper Sorbian, Venetian, Walloon, Welsh.</description>
<size>51294286821</size>
</item><item>
<title>noaa-ncei-International-quality-controlled-ocean-db</title>
<category>Dataset</category>
<infohash>cf9bcf97ab6306a4ed8af838e4f83910d0b4d075</infohash>
<guid>https://academictorrents.com/details/cf9bcf97ab6306a4ed8af838e4f83910d0b4d075</guid>
<link>https://academictorrents.com/details/cf9bcf97ab6306a4ed8af838e4f83910d0b4d075</link>
<description>This data set includes subsurface ocean profiles of temperature, salinity, oxygen, nutrients, ocean tracers, optics, and biology (chlorophyll, plankton) taken from 1772 to 2024 in the global ocean using bottles, CTD, XBT, MBT, profiling floats, moored buoys, ice drifting buoys, gliders, towed profilers, and instrumented pinnipeds. This data set was prepared at NCEI in CF compliant netCDF ragged array format under the direction of the IQuOD project. The IQuOD (International Quality-controlled Ocean Database) effort is being organized by the oceanographic community, and includes experts in data quality and management, climate modelers and the broader climate-related community. The primary focus of IQuOD is to produce and freely distribute the highest quality and complete single ocean profile repository along with (intelligent) metadata and assigned uncertainties for use in ocean climate research applications. This goal will be achieved by developing and implementing an internationally agreed framework. IQuOD v0.1 is a preliminary data set which includes uncertainties on each temperature measurement and intelligent metadata for identifying critical missing information.</description>
<size>211694031687</size>
</item><item>
<title>JEFF-3.3 nuclear data processed by NEA and formatted for OpenMC</title>
<category>Dataset</category>
<infohash>f08d30161db51aea5597aa2ee5e53c52dfaf8679</infohash>
<guid>https://academictorrents.com/details/f08d30161db51aea5597aa2ee5e53c52dfaf8679</guid>
<link>https://academictorrents.com/details/f08d30161db51aea5597aa2ee5e53c52dfaf8679</link>
<description/>
<size>2350318592</size>
</item><item>
<title>ENDF/B-VII.1 nuclear data processed by LANL and formatted for OpenMC</title>
<category>Dataset</category>
<infohash>49a63cef9053a88285cf9e8648e530280921aa8b</infohash>
<guid>https://academictorrents.com/details/49a63cef9053a88285cf9e8648e530280921aa8b</guid>
<link>https://academictorrents.com/details/49a63cef9053a88285cf9e8648e530280921aa8b</link>
<description/>
<size>1649637524</size>
</item><item>
<title>ENDF/B-VII.0 nuclear data processed by LANL as LIB70 and formatted for OpenMC</title>
<category>Dataset</category>
<infohash>dc86b377ce6fda76aac92eacb55d0401c5d29968</infohash>
<guid>https://academictorrents.com/details/dc86b377ce6fda76aac92eacb55d0401c5d29968</guid>
<link>https://academictorrents.com/details/dc86b377ce6fda76aac92eacb55d0401c5d29968</link>
<description/>
<size>782776372</size>
</item><item>
<title>JEFF-3.2 Nuclear Data processed by NEA and formatted for OpenMC</title>
<category>Dataset</category>
<infohash>c02cf6f1659fdda471224d5ee60c72d08ad01cf8</infohash>
<guid>https://academictorrents.com/details/c02cf6f1659fdda471224d5ee60c72d08ad01cf8</guid>
<link>https://academictorrents.com/details/c02cf6f1659fdda471224d5ee60c72d08ad01cf8</link>
<description/>
<size>1954347284</size>
</item><item>
<title>FENDL-3.2 Nuclear Data for OpenMC</title>
<category>Dataset</category>
<infohash>0bf3984552a128eb4a06771712dcb28374ac67c2</infohash>
<guid>https://academictorrents.com/details/0bf3984552a128eb4a06771712dcb28374ac67c2</guid>
<link>https://academictorrents.com/details/0bf3984552a128eb4a06771712dcb28374ac67c2</link>
<description/>
<size>335955104</size>
</item><item>
<title>JEFF-3.3 Nuclear Data processed for OpenMC</title>
<category>Dataset</category>
<infohash>bab935ed2632773a35da8b171e0eb5c8ab2ba508</infohash>
<guid>https://academictorrents.com/details/bab935ed2632773a35da8b171e0eb5c8ab2ba508</guid>
<link>https://academictorrents.com/details/bab935ed2632773a35da8b171e0eb5c8ab2ba508</link>
<description/>
<size>2706223740</size>
</item><item>
<title>ENDF/B-VIII.0 nuclear data processed for OpenMC</title>
<category>Dataset</category>
<infohash>2db3e41a0231b2c14c45c7cc8d30285ccebd60a2</infohash>
<guid>https://academictorrents.com/details/2db3e41a0231b2c14c45c7cc8d30285ccebd60a2</guid>
<link>https://academictorrents.com/details/2db3e41a0231b2c14c45c7cc8d30285ccebd60a2</link>
<description/>
<size>3382208012</size>
</item><item>
<title>ENDF/B-VII.1 nuclear data processed for OpenMC</title>
<category>Dataset</category>
<infohash>be99c621fb8aef3663448d50a91626b202ef007e</infohash>
<guid>https://academictorrents.com/details/be99c621fb8aef3663448d50a91626b202ef007e</guid>
<link>https://academictorrents.com/details/be99c621fb8aef3663448d50a91626b202ef007e</link>
<description/>
<size>1697275216</size>
</item><item>
<title>ENDF/B-VIII.0 nuclear data processed by LANL as Lib80x and prepared for OpenMC</title>
<category>Dataset</category>
<infohash>d8961ddc2ba3f041c1c0215820dcd8722ae25c8e</infohash>
<guid>https://academictorrents.com/details/d8961ddc2ba3f041c1c0215820dcd8722ae25c8e</guid>
<link>https://academictorrents.com/details/d8961ddc2ba3f041c1c0215820dcd8722ae25c8e</link>
<description/>
<size>3663422944</size>
</item><item>
<title>Reddit comments/submissions 2025-07</title>
<category>Dataset</category>
<infohash>b6a7ccf72368a7d39c018c423e01bc15aa551122</infohash>
<guid>https://academictorrents.com/details/b6a7ccf72368a7d39c018c423e01bc15aa551122</guid>
<link>https://academictorrents.com/details/b6a7ccf72368a7d39c018c423e01bc15aa551122</link>
<description>Reddit comments and submissions from 2025-07 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>58351467718</size>
</item><item>
<title>noaa-ncei-land-surface-reflectance</title>
<category>Dataset</category>
<infohash>3d53caff9f8133d8cbf949610e183358ca421a63</infohash>
<guid>https://academictorrents.com/details/3d53caff9f8133d8cbf949610e183358ca421a63</guid>
<link>https://academictorrents.com/details/3d53caff9f8133d8cbf949610e183358ca421a63</link>
<description>AVHRR: This dataset contains gridded daily surface reflectance and brightness temperatures derived from the Advanced Very High Resolution Radiometer (AVHRR) sensors onboard eight NOAA polar orbiting satellites: NOAA-7, -9, -11, -14, -16, -17, -18 and -19. Surface reflectance from AVHRR channels 1 and 2 (at 640 and 860 nm) are a NOAA Climate Data Record (CDR). The dataset spans from 1981 to 10 days before the present, and was processed from the AVHRR Global Area Coverage (GAC) Level 1b dataset. AVHRR GAC observations are packaged into data arrays with latitude and longitude dimensions of 3600 x 7200 covering the globe at 0.05 degree spatial resolution. This dataset is one of the Land Surface CDR Version 5 products produced by the NASA Goddard Space Flight Center (GSFC) and the University of Maryland (UMD). Other Land Surface CDR products include the Normalized Difference Vegetation Index (NDVI), Leaf Area Index (LAI) and Fraction of Absorbed Photosynthetically Active Radiation (FAPAR). Scientific improvements for Version 5 include updating to higher resolution ancillary data and more accurate approaches for BRDF correction, calibration, compositing, and QA. Version 5 also corrects the data for known errors in time, latitude, and longitude variables, as well as improves the global and variable attribute definitions. The dataset is in the netCDF-4 file format following ACDD and CF Conventions. The dataset is accompanied by algorithm documentation, data flow diagram and source code for the NOAA CDR Program. VIIRS: This dataset contains gridded daily surface reflectance and brightness temperatures derived from the Visible Infrared Imaging Radiometer Suite (VIIRS) sensors onboard NOAA polar orbiting satellites. Surface reflectance from VIIRS channels I1, I2, and I3 (at 640, 865, and 1610 nm) are a NOAA Climate Data Record (CDR). 
The dataset spans from 2014 to 10 days before the present, and was processed from the VIIRS 375m and 750m Earth view Sensor Data Record (SDR) datasets. VIIRS surface reflectance observations are packaged into data arrays with latitude and longitude dimensions of 3600 x 7200 covering the globe at 0.05 degree spatial resolution. This dataset is one of the Land Surface CDR products produced by the NASA Goddard Space Flight Center (GSFC) and the University of Maryland (UMD). Other Land Surface CDR products include the Normalized Difference Vegetation Index (NDVI), Leaf Area Index (LAI) and Fraction of Absorbed Photosynthetically Active Radiation (FAPAR). The dataset is in the netCDF-4 file format following ACDD and CF Conventions. The dataset is accompanied by algorithm documentation, data flow diagram and source code for the NOAA CDR Program.</description>
<size>1925156633265</size>
</item><item>
<title>enwiki-20250801-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>19f6e3d1c44d4bc997cce8d2325964c28895c9cb</infohash>
<guid>https://academictorrents.com/details/19f6e3d1c44d4bc997cce8d2325964c28895c9cb</guid>
<link>https://academictorrents.com/details/19f6e3d1c44d4bc997cce8d2325964c28895c9cb</link>
<description>English Wikipedia Multistream 2025-08-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>25351930112</size>
</item><item>
<title>Stack Exchange Data Dump (2025-06-30, revision 2)</title>
<category>Dataset</category>
<infohash>53d504734619bc57bf4f4ec81fdf2a2536b3b501</infohash>
<guid>https://academictorrents.com/details/53d504734619bc57bf4f4ec81fdf2a2536b3b501</guid>
<link>https://academictorrents.com/details/53d504734619bc57bf4f4ec81fdf2a2536b3b501</link>
<description>WARNING: Stack Exchange has poisoned some of the data dumps with garbage data: https://meta.stackexchange.com/q/412018 - this is an upstream issue that cannot currently be fixed. This data dump is sourced from the various sites in the Stack Exchange network of Q&amp;A sites. This dump contains data up to and including 2025-06-30. This revision contains one bugfix from rev. 1: one site lacked content in its posts.xml: https://meta.stackexchange.com/q/411792 The exact licenses for each bit of content are embedded in each entry. For license date ranges, see the root-level license.txt, or https://stackoverflow.com/help/licensing. For the schema, see the sede-and-data-dump-schema.md file within each .7z. This torrent has also been archived at https://archive.org/details/stackexchange_20250630_rev2</description>
<size>97361592423</size>
</item><item>
<title>Reddit comments/submissions 2005-06 to 2025-06</title>
<category>Dataset</category>
<infohash>30dee5f0406da7a353aff6a8caa2d54fd01f2ca1</infohash>
<guid>https://academictorrents.com/details/30dee5f0406da7a353aff6a8caa2d54fd01f2ca1</guid>
<link>https://academictorrents.com/details/30dee5f0406da7a353aff6a8caa2d54fd01f2ca1</link>
<description>Reddit comments and submissions from 2005-06 to 2025-06, collected by Pushshift and u/RaiderBDev (the more recent dumps were collected by u/RaiderBDev). These are zstandard-compressed ndjson files. Example Python scripts for parsing the data can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>3461377903467</size>
</item><item>
<title>PastVu Historical Photographs (2025-07-23)</title>
<category>Dataset</category>
<infohash>8cf94961303049e94ed82374221602aba17f6926</infohash>
<guid>https://academictorrents.com/details/8cf94961303049e94ed82374221602aba17f6926</guid>
<link>https://academictorrents.com/details/8cf94961303049e94ed82374221602aba17f6926</link>
<description>This dataset contains approximately 2,093,000 historical photographs from PastVu.com, spanning the years 1826-2000. PastVu is a collaborative project that allows users to view historical photographs on an interactive map, providing a unique temporal and geographical perspective on historical documentation. The dataset includes images downloaded at the best available resolution and is structured in webdataset format for efficient processing. The collection covers a wide range of historical subjects including architecture, street scenes, cultural events, portraits, and everyday life, primarily focused on Russia and former Soviet territories, though it includes photographs from around the world.</description>
<size>1799720013096</size>
</item><item>
<title>Zuse-Z1-Panorama-Photos</title>
<category>Dataset</category>
<infohash>74a18f91b2b72c4ee31041e3a711606a07a0d72e</infohash>
<guid>https://academictorrents.com/details/74a18f91b2b72c4ee31041e3a711606a07a0d72e</guid>
<link>https://academictorrents.com/details/74a18f91b2b72c4ee31041e3a711606a07a0d72e</link>
<description>The Z1 was the first computer built by German computer pioneer Konrad Zuse, in 1936-1938 - a mechanical binary floating-point computer with free programmability. Its logic gates and memory cells consist of moving and moved plates with cutouts, coupled by switching pins and grouped into layers, which are in turn grouped into blocks. The original was destroyed in World War II. In the 1980s, a reconstruction was completed by Konrad Zuse himself, with helpers. The camera was mounted on a dolly and driven around the Z1 replica in the Deutsches Technikmuseum in Berlin, taking pictures at different zoom levels, from three different vertical angles (pitch angles), and from hundreds of points of view around the machine. After the discontinuation of Adobe Flash Player, the panorama viewer on zuse-z1.zib.de became unusable in modern browsers. This dataset contains the image tiles of the original viewer as well as the stitched images.</description>
<size>7138293195</size>
</item><item>
<title>Audius (2025-04-25)</title>
<category>Dataset</category>
<infohash>7cfe1a9fb3ede275fe1f2fde27e290ad012479e4</infohash>
<guid>https://academictorrents.com/details/7cfe1a9fb3ede275fe1f2fde27e290ad012479e4</guid>
<link>https://academictorrents.com/details/7cfe1a9fb3ede275fe1f2fde27e290ad012479e4</link>
<description>This dataset contains metadata for 716,340 audio tracks from the Audius decentralized music platform. Audius is a blockchain-based streaming service that allows artists to share and monetize their music directly with listeners. The dataset includes detailed metadata about tracks, artists, and engagement metrics.</description>
<size>187629532</size>
</item><item>
<title>side7 (2025-04-25)</title>
<category>Dataset</category>
<infohash>4bb0f419102d7919eac6f759065b5f0ebf0b6e22</infohash>
<guid>https://academictorrents.com/details/4bb0f419102d7919eac6f759065b5f0ebf0b6e22</guid>
<link>https://academictorrents.com/details/4bb0f419102d7919eac6f759065b5f0ebf0b6e22</link>
<description>This dataset contains artwork collected from Side7. The dataset includes images along with associated metadata such as titles, descriptions, categories, ratings, and tags.</description>
<size>8848577820</size>
</item><item>
<title>Tamago (2025-04-25)</title>
<category>Dataset</category>
<infohash>5211929574baedb66c8876535d9c81eba80dbd9d</infohash>
<guid>https://academictorrents.com/details/5211929574baedb66c8876535d9c81eba80dbd9d</guid>
<link>https://academictorrents.com/details/5211929574baedb66c8876535d9c81eba80dbd9d</link>
<description>This dataset contains metadata for 1,567 music tracks from tamastream, a community-based music streaming platform built on the NEAR blockchain. The dataset includes detailed track metadata including titles, descriptions, genres, and user interactions, providing insights into independent artist communities and their music.</description>
<size>30719490198</size>
</item><item>
<title>Piczel (2025-04-25)</title>
<category>Dataset</category>
<infohash>fd377f9a290e7a7a45235389d41f5ef654de4cc5</infohash>
<guid>https://academictorrents.com/details/fd377f9a290e7a7a45235389d41f5ef654de4cc5</guid>
<link>https://academictorrents.com/details/fd377f9a290e7a7a45235389d41f5ef654de4cc5</link>
<description>This dataset contains information about image files from piczel. The dataset includes both metadata and the original media files themselves. The metadata covers 76,607 image entries, including URLs, tags, file information, upload dates and descriptions. The dataset also contains the corresponding original image and video files.</description>
<size>136366809085</size>
</item><item>
<title>Fimfiction (2025-04-25)</title>
<category>Dataset</category>
<infohash>08d162a9c91f0801549e2e3907f9b6884b5a2547</infohash>
<guid>https://academictorrents.com/details/08d162a9c91f0801549e2e3907f9b6884b5a2547</guid>
<link>https://academictorrents.com/details/08d162a9c91f0801549e2e3907f9b6884b5a2547</link>
<description>This dataset contains 815,740 user stories from Fimfiction.net, a platform dedicated to fanfiction. Each entry includes the story's title, content, and unique identifier. The writings span diverse genres, themes, and creative styles, contributed by the platform's community.</description>
<size>4076591866</size>
</item><item>
<title>FicWad (2025-04-25)</title>
<category>Dataset</category>
<infohash>d410fa33a2e5a00c275e751145b256249a625560</infohash>
<guid>https://academictorrents.com/details/d410fa33a2e5a00c275e751145b256249a625560</guid>
<link>https://academictorrents.com/details/d410fa33a2e5a00c275e751145b256249a625560</link>
<description>This dataset contains fan fiction stories collected from FicWad.com, a community platform for fan fiction authors and readers. The dataset includes complete stories with metadata such as titles, summaries, categories, ratings, genres, publication dates, and full text content.</description>
<size>391680979</size>
</item><item>
<title>CharacterHub Metadata (2025-04-25)</title>
<category>Dataset</category>
<infohash>9b782f5745dd42bf436817c0596fab8986ad6721</infohash>
<guid>https://academictorrents.com/details/9b782f5745dd42bf436817c0596fab8986ad6721</guid>
<link>https://academictorrents.com/details/9b782f5745dd42bf436817c0596fab8986ad6721</link>
<description>This dataset contains metadata for 174,158 character profiles from CharacterHub. The dataset includes metadata such as character names, descriptions, backstories, tags, attributes (like species, age, appearance), and links to profile/cover images.</description>
<size>87291679</size>
</item><item>
<title>PaperDemon Writings (2025-04-25)</title>
<category>Dataset</category>
<infohash>ed31b5544fe542a2d7bd67de0553f10a835329d1</infohash>
<guid>https://academictorrents.com/details/ed31b5544fe542a2d7bd67de0553f10a835329d1</guid>
<link>https://academictorrents.com/details/ed31b5544fe542a2d7bd67de0553f10a835329d1</link>
<description>This dataset contains creative writings collected from PaperDemon. The dataset includes stories with their chapters, titles, and full text content.</description>
<size>22537838</size>
</item><item>
<title>itaku metadata (2025-04-25)</title>
<category>Dataset</category>
<infohash>ea62baba01e5bf5df8f428e22f47c3f6b0b26534</infohash>
<guid>https://academictorrents.com/details/ea62baba01e5bf5df8f428e22f47c3f6b0b26534</guid>
<link>https://academictorrents.com/details/ea62baba01e5bf5df8f428e22f47c3f6b0b26534</link>
<description/>
<size>108170942</size>
</item><item>
<title>Artfol Metadata (2025-04-25)</title>
<category>Dataset</category>
<infohash>6160ed74f0579e7f5cbd94a2fdb816061077671d</infohash>
<guid>https://academictorrents.com/details/6160ed74f0579e7f5cbd94a2fdb816061077671d</guid>
<link>https://academictorrents.com/details/6160ed74f0579e7f5cbd94a2fdb816061077671d</link>
<description>This dataset contains metadata for 1,879,965 artwork posts from Artfol, an independent social media platform focused on artists. Each entry represents an artwork post with associated metadata including title, moderation flags, and URLs.</description>
<size>75575714</size>
</item><item>
<title>PaperDemon Art (2025-04-25)</title>
<category>Dataset</category>
<infohash>133e186dd3ff07892a23f246de77b0b419669a71</infohash>
<guid>https://academictorrents.com/details/133e186dd3ff07892a23f246de77b0b419669a71</guid>
<link>https://academictorrents.com/details/133e186dd3ff07892a23f246de77b0b419669a71</link>
<description>This dataset contains artwork collected from PaperDemon. The dataset includes artwork images along with associated metadata such as titles, posting dates, descriptions, tags, and user comments.</description>
<size>46072951212</size>
</item><item>
<title>PaintBerri Art [2025-04-25]</title>
<category>Dataset</category>
<infohash>5624a329219d0f13a1dd4dae56ca7a3d14311b29</infohash>
<guid>https://academictorrents.com/details/5624a329219d0f13a1dd4dae56ca7a3d14311b29</guid>
<link>https://academictorrents.com/details/5624a329219d0f13a1dd4dae56ca7a3d14311b29</link>
<description>This dataset contains hand-drawn artwork collected from PaintBerri. The dataset includes images along with associated metadata such as publication dates, titles, descriptions, and dimensions.</description>
<size>17202941697</size>
</item><item>
<title>Archive Of Our Own [12.6 mil] (2025-04-25)</title>
<category>Dataset</category>
<infohash>51c21fd1ae2896d6d5307347960da059236e6bd9</infohash>
<guid>https://academictorrents.com/details/51c21fd1ae2896d6d5307347960da059236e6bd9</guid>
<link>https://academictorrents.com/details/51c21fd1ae2896d6d5307347960da059236e6bd9</link>
<description>This dataset contains approximately 12.6 million publicly available works from AO3. The dataset was created by processing works with IDs from 1 to 63,200,000 that are publicly accessible. Each entry contains the full text of the work along with comprehensive metadata including title, author, fandom, relationships, characters, tags, warnings, and other classification information.</description>
<size>163802124885</size>
</item><item>
<title>Stack Exchange Data Dump (2025-06-30)</title>
<category>Dataset</category>
<infohash>7c8c9a8ffff4d962e052674e236ea0b7390cd9c0</infohash>
<guid>https://academictorrents.com/details/7c8c9a8ffff4d962e052674e236ea0b7390cd9c0</guid>
<link>https://academictorrents.com/details/7c8c9a8ffff4d962e052674e236ea0b7390cd9c0</link>
<description>This data dump is sourced from the various sites in the Stack Exchange network of Q&amp;A sites. This dump contains data up to and including 2025-06-30. The exact licenses for each bit of content are embedded in each entry. For license date ranges, see the root-level license.txt, or https://stackoverflow.com/help/licensing. For the schema, see the sede-and-data-dump-schema.md file within each .7z. This torrent has also been archived at https://archive.org/details/stackexchange_20250630</description>
<size>97339542034</size>
</item><item>
<title>Computational NeuroScience</title>
<category>Course</category>
<infohash>4a16e5c3b12ad16246ff773337851ddca669d0a5</infohash>
<guid>https://academictorrents.com/details/4a16e5c3b12ad16246ff773337851ddca669d0a5</guid>
<link>https://academictorrents.com/details/4a16e5c3b12ad16246ff773337851ddca669d0a5</link>
<description>This course consists of 8 modules. It provides an introduction to basic computational methods for understanding what nervous systems do and for determining how they function. We will explore the computational principles governing various aspects of vision, sensory-motor control, learning, and memory. Specific topics that will be covered include representation of information by spiking neurons, processing of information in neural networks, and algorithms for adaptation and learning. We will make use of Matlab/Octave/Python demonstrations and exercises to gain a deeper understanding of concepts and methods introduced in the course. The course is primarily aimed at third- or fourth-year undergraduates and beginning graduate students, as well as professionals and distance learners interested in learning how the brain processes information.</description>
<size>1000642215</size>
</item><item>
<title>[Apna College] Alpha Batch 5.0 | Complete Placement Course</title>
<category>Course</category>
<infohash>25eee12586a360305ace93346f385630a7cb2774</infohash>
<guid>https://academictorrents.com/details/25eee12586a360305ace93346f385630a7cb2774</guid>
<link>https://academictorrents.com/details/25eee12586a360305ace93346f385630a7cb2774</link>
<description>[Apna College] Alpha Batch 5.0 | Complete Placement Course https://onehack.us/uploads/default/original/3X/3/a/3ac276021c0c79c7cfb76ec13022eb7237af0cec.jpeg Master coding, DSA, and placement skills with Apna College Alpha Batch 5.0. &gt; Become placement-ready in just 3.5 months. &gt; Learn from industry experts through structured videos and live mentorship. &gt; Join thousands who've landed jobs in top-tier companies. &gt; Official website: `apnacollege.in/course/alpha-batch-5` --- # 1,339 files (25.4 GB) of comprehensive material included. https://onehack.us/uploads/default/original/3X/e/5/e56f8a24aa41e85303d44335a3f3be64559106bb.png https://onehack.us/uploads/default/original/3X/3/d/3d22896ee92155802f0baa18c4cf3bcbe9b153b7.png</description>
<size>27292703903</size>
</item><item>
<title>Reddit comments/submissions 2025-06</title>
<category>Dataset</category>
<infohash>bec5590bd3bc6c0f2d868f36ec92bec1aff4480e</infohash>
<guid>https://academictorrents.com/details/bec5590bd3bc6c0f2d868f36ec92bec1aff4480e</guid>
<link>https://academictorrents.com/details/bec5590bd3bc6c0f2d868f36ec92bec1aff4480e</link>
<description>Reddit comments and submissions from 2025-06 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>56457099285</size>
</item><item>
<title>gRPC Masterclass with Java &amp; Spring Boot</title>
<category>Course</category>
<infohash>1e5e059497095699c2c063effaecd6ee0859dc3e</infohash>
<guid>https://academictorrents.com/details/1e5e059497095699c2c063effaecd6ee0859dc3e</guid>
<link>https://academictorrents.com/details/1e5e059497095699c2c063effaecd6ee0859dc3e</link>
<description># Master gRPC for High-Performance Microservices with Spring Boot &amp; Protocol Buffers **Requirements** - Knowledge of Java 8 or above - Comfortable with Indian accent (for course delivery) --- Empower yourself to build next-generation microservices with this comprehensive gRPC course. Learn how to leverage **Protocol Buffers** and **Spring Boot** to create scalable, efficient, and performant applications. ## 🚀 Key Takeaways - **Master Protocol Buffers** - Gain a solid understanding of Google’s language-neutral data format for seamless data exchange. - **Unlock gRPC Potential** - Explore the benefits of gRPC for microservices communication, achieving up to **10x faster performance** compared to REST APIs. - **Demystify RPCs** - Dive deep into different RPC types (Unary, Streaming) and their effective implementation. - **Conquer Load Balancing** - Address load balancing challenges and implement effective strategies for optimal performance. - **Secure Your Services** - Implement robust authentication mechanisms using user and service tokens. - **Handle Errors Confidently** - Master error handling techniques with gRPC and Protobuf `OneOf`. - **Spring Boot Integration** - Seamlessly integrate gRPC with Spring Boot for efficient microservice development. - **Best Practices &amp; Real-World Insights** - Learn industry best practices and overcome real-world challenges in gRPC deployments. --- ## 🎯 Why Join This Course? Unlock the power of gRPC to build **highly performant, scalable, and efficient microservices** using **Protocol Buffers** and **Spring Boot**. Gain the skills necessary to: - Overcome common challenges in microservices communication - Achieve superior performance - Streamline your API design with gRPC --- Start mastering gRPC today and take your microservices architecture to the next level!</description>
<size>7111697934</size>
</item><item>
<title>Apache Kafka for Developers using Spring Boot</title>
<category>Course</category>
<infohash>8d6bce26eb6e625e9f2baaae0fd1fffbdcc7d77a</infohash>
<guid>https://academictorrents.com/details/8d6bce26eb6e625e9f2baaae0fd1fffbdcc7d77a</guid>
<link>https://academictorrents.com/details/8d6bce26eb6e625e9f2baaae0fd1fffbdcc7d77a</link>
<description># Kafka with Spring Boot Course ## Requirements - Java 11 or greater is required - IntelliJ, Eclipse, or a similar IDE - Knowledge about Spring Boot - Experience writing tests using JUnit - Gradle or Maven knowledge is needed ## Description This course is structured to give you both theoretical and coding experience with **Apache Kafka using Spring Boot**. It is targeted at developers who want to build **enterprise-standard Kafka client applications** using Spring Boot. If you're looking to learn: - Use cases where Kafka fits really well - Internals of Kafka and how it works - Building enterprise-standard Kafka client applications (Producer/Consumer API) using Spring Boot - Writing unit/integration tests for Kafka client applications 👉 **This is the right course for you.** This is a purely hands-on course — you'll learn concepts through code. By the end of this course, you will have a complete understanding of coding and implementing Kafka clients using Spring Boot with the Producer/Consumer API. 
--- ## Course Content ### Getting Started with Kafka - Quick introduction to Apache Kafka: terminologies and different client APIs - Download and install Kafka ### Understanding Kafka Components and Internals (Theory + Hands-On) - Kafka internals - Topics and partitions - Set up a local Kafka cluster with multiple brokers - Produce/consume messages in the Kafka cluster - Consumer offsets and consumer groups - Commit log and retention policy - Kafka load distribution, fault tolerance, and robustness --- ### Application Overview - Overview of the application you'll build during the course --- ### Build Spring Boot Kafka Producer - Hands-On - Build a Kafka producer using Spring Boot - Build a REST API for posting events into the application - Use `KafkaTemplate` to publish data into Kafka topics - Different approaches to produce messages - Publish Kafka records using headers --- ### Integration Testing Using JUnit5 - Hands-On - Integration tests using Embedded Kafka - Test API interactions - Test Embedded Kafka interactions --- ### Unit Testing Using JUnit5 - Hands-On - Unit tests for the Kafka producer - Controller layer tests using `@WebMvcTest` and `MockMvc` - Request payload validations - Custom error handler for different response codes --- ### Kafka Producer - Sending Messages with Key - Send records to Kafka topics with a key ### Kafka Producer - Important Configurations - Key configurations for reliable message delivery --- ### Build Spring Boot Kafka Consumer - Hands-On - Build a Kafka consumer using Spring Boot - Set up the base consumer project - Learn Spring Kafka terminologies for consumer configuration - Configure consumers with `@KafkaListener` - Understand Spring Boot auto-configuration for Kafka consumers --- ### Consumer Groups and Offset Management - Hands-On - Consumer groups and offset management - Scalable message consumption and consumer rebalance - Default and manual offset management - Increase concurrency for scalable consumption 
--- ### Persisting Library Events in DB (H2 In-Memory Database) - Integrate the DB layer using Spring JPA - Configure the H2 in-memory DB - Create `LibraryEvent` and `Book` entities - Build the service layer for ADD/MODIFY event types --- ### Integration Testing the Kafka Consumer (Embedded Kafka) - Configure Embedded Kafka for integration tests - Test posting NEW and UPDATE library events - Integration tests against real DBs using Testcontainers --- ### Error Handling, Retry, and Recovery - Kafka Consumer - Custom error handler - Retry strategies - Retry specific exceptions using a custom `RetryPolicy` - Recovery handling --- ### Error Handling, Retry, and Recovery - Kafka Producer - Error handling in the producer - Retry when the broker is not available - Retry when `min.insync.replicas` is not met - Retain/recover failed records --- ## Final Outcome By the end of this course, you'll have a **complete understanding and knowledge of building enterprise-standard Kafka producers and consumers using Spring Boot**, including unit and integration tests with Embedded Kafka.</description>
<size>3610749189</size>
</item><item>
<title>noaa-global-surface-temperature</title>
<category>Dataset</category>
<infohash>f98189ad498d773acd831b590dd8b1612cf6111d</infohash>
<guid>https://academictorrents.com/details/f98189ad498d773acd831b590dd8b1612cf6111d</guid>
<link>https://academictorrents.com/details/f98189ad498d773acd831b590dd8b1612cf6111d</link>
<description>The NOAA Merged Land Ocean Global Surface Temperature Analysis (NOAAGlobalTemp, formerly known as MLOST) combines long-term sea surface (water) temperature (SST) and land surface (air) temperature datasets to create a complete, accurate depiction of global temperature trends. The dataset is used to support climate monitoring activities such as the Monthly Global Climate Assessment, and also provides input data for a number of climate models. Methodology: To achieve global temperature coverage, NOAAGlobalTemp combines the sea surface temperature (SST) with land surface air temperature (LSAT). Data is available as a series of temperature anomalies relative to a 1971–2000 monthly climatology. To compute anomalies relative to climatologies for other time periods (e.g. 1991–2020), calculate the average of the NOAAGlobalTemp anomalies for that time period (e.g. 1991–2020), and then subtract the average from the original anomalies.</description>
<size>2598745640</size>
</item><item>
<title>Build Reactive Microservices using Spring WebFlux SpringBoot</title>
<category>Course</category>
<infohash>03519f3ce52dbf100c444d42106f05fa7f4cdb69</infohash>
<guid>https://academictorrents.com/details/03519f3ce52dbf100c444d42106f05fa7f4cdb69</guid>
<link>https://academictorrents.com/details/03519f3ce52dbf100c444d42106f05fa7f4cdb69</link>
<description>🚀 Prerequisites JDK 8 or higher Any IDE (IntelliJ, Eclipse, etc.) Spring Boot knowledge (mandatory to benefit from the course) 📚 Course Overview This course provides both theory and hands-on coding to master Reactive Programming and Reactive RESTful APIs using Spring WebFlux. You will: Understand Reactive Programming concepts Write reactive code with Spring WebFlux and databases (MongoDB) Build Reactive REST APIs (annotated and functional style) Handle errors and exceptions in reactive applications Stream real-time data using Server-Sent Events (SSE) 📝 What You’ll Learn 1️⃣ Introduction &amp; Motivation Why Reactive Programming? Need for reactive programming Limitations of Spring MVC Spring MVC concurrency model What is Reactive Programming? Basics with simple examples Introduction to Reactive Streams specification Overview of popular Reactive libraries 2️⃣ Project Reactor Fundamentals of Project Reactor Reactive types: Flux and Mono Hands-on examples with Flux &amp; Mono JUnit testing with Flux &amp; Mono Operators in Reactor 3️⃣ Spring WebFlux API Development 🌟 Annotated Controllers Build first non-blocking REST API Return Flux / Mono from endpoints JUnit tests with WebTestClient 🌟 Functional Web Module Build non-blocking API with RouterFunction and HandlerFunction JUnit tests with WebTestClient 🌟 Under the Hood Spring WebFlux &amp; Netty execution model Netty concepts: Channel, EventLoop 💾 Reactive Programming with Databases Configure reactive MongoDB Define MongoDB Item document Setup reactive MongoDB adapter JUnit tests for reactive repository 🔧 CRUD API Development Build Item CRUD API using @RestController Build Item CRUD API using Functional Web Automated tests using WebTestClient 🌐 Reactive Client Build non-blocking client using WebClient Perform GET, POST, PUT, DELETE Techniques with exchange() and retrieve() Handle exceptions in WebClient ⚠️ Exception Handling RestController Handle errors with @ExceptionHandler &amp; @ControllerAdvice JUnit 
tests for exception scenarios Functional Web Handle errors with WebExceptionHandler JUnit tests for exception scenarios WebClient Handle exceptions with exchange() &amp; retrieve() 📡 Streaming Real-Time Data (SSE) Build streaming endpoint using SSE Use tailable cursors &amp; capped collections in MongoDB Use @Tailable for non-blocking streaming Write automated tests for SSE endpoints</description>
<size>2704142203</size>
</item><item>
<title>ruwiki-20250601-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>a808c21cd92487fa9a28c238b88a13dcc7ebaf4e</infohash>
<guid>https://academictorrents.com/details/a808c21cd92487fa9a28c238b88a13dcc7ebaf4e</guid>
<link>https://academictorrents.com/details/a808c21cd92487fa9a28c238b88a13dcc7ebaf4e</link>
<description>Это дамп русскоязычной Википедии, полученный 2025-06-01, в многопоточном формате. Для получения дополнительной информации см. [1]. Этот контент лицензирован в соответствии с лицензией Creative Commons Attribution Share-Alike версии 4.0 (CC-BY-SA 4.0). Более подробную информацию можно найти на [2]. Разрешение на использование и распространение, в том числе через BitTorrent, предоставляется в соответствии с Условиями использования Wikimedia Foundation, доступными по адресу [3]. Дополнительная информация об авторских правах доступна на [4], которая также может быть полезной. This is a dump of the Russian language Wikipedia, taken 2025-06-01, in multistream format. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute - including via BitTorrent - is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>5858685246</size>
</item><item>
<title>jawiki-20250601-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>62b086e1ecb4c231af09683be375adba34f1c381</infohash>
<guid>https://academictorrents.com/details/62b086e1ecb4c231af09683be375adba34f1c381</guid>
<link>https://academictorrents.com/details/62b086e1ecb4c231af09683be375adba34f1c381</link>
<description>これは、2025年6月1日に取得された日本語版Wikipediaのマルチストリーム形式のダンプです。詳細については、[1]を参照してください。 このコンテンツは、クリエイティブ・コモンズ 表示・継承ライセンス バージョン4.0（CC-BY-SA 4.0）の下でライセンスされています。詳細については、[2]を参照してください。 使用および配布（BitTorrent経由を含む）は、[3]に記載されているウィキメディア財団の利用規約に基づいて許可されています。[4]には、参考になる可能性のある追加の著作権情報があります。 This is a dump of the Japanese language Wikipedia, taken 2025-06-01, in multistream format. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute - including via BitTorrent - is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>4530521499</size>
</item><item>
<title>ptwiki-20250601-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>bdeab08ba4e95049b6b31b591f761847fc23562c</infohash>
<guid>https://academictorrents.com/details/bdeab08ba4e95049b6b31b591f761847fc23562c</guid>
<link>https://academictorrents.com/details/bdeab08ba4e95049b6b31b591f761847fc23562c</link>
<description>Trata-se de um dump da Wikipédia em português, obtido em 01/06/2025, em formato multistream. Para mais informações, consultar [1]. Este conteúdo está licenciado sob a licença Creative Commons Atribuição-PartilhaIgual versão 4.0 (CC-BY-SA 4.0). Mais informação pode ser encontrada em [2]. A permissão para utilizar e distribuir - incluindo via BitTorrent - é concedida de acordo com os Termos de Utilização da Wikimedia Foundation em [3]. Existem informações adicionais sobre direitos de autor disponíveis em [4] que também podem ser informativas. This is a dump of the Portuguese language Wikipedia, taken 2025-06-01, in multistream format. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute - including via BitTorrent - is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>2574679468</size>
</item><item>
<title>plwiki-20250601-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>d017be591fc7fd51d217de9307eec503b12c5648</infohash>
<guid>https://academictorrents.com/details/d017be591fc7fd51d217de9307eec503b12c5648</guid>
<link>https://academictorrents.com/details/d017be591fc7fd51d217de9307eec503b12c5648</link>
<description>This is a dump of the Polish language Wikipedia, taken 2025-06-01, in multistream format. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute, including via BitTorrent, is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>2730072604</size>
</item><item>
<title>itwiki-20250601-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>8d6ce555e1b6fcb4060c6b71ee5327431e2ec238</infohash>
<guid>https://academictorrents.com/details/8d6ce555e1b6fcb4060c6b71ee5327431e2ec238</guid>
<link>https://academictorrents.com/details/8d6ce555e1b6fcb4060c6b71ee5327431e2ec238</link>
<description>This is a dump of the Italian language Wikipedia, taken 2025-06-01, in multistream format. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute, including via BitTorrent, is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>4192513459</size>
</item><item>
<title>frwiki-20250601-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>63a083c3b5e0ecc6fd524f0c699853c4a45a4054</infohash>
<guid>https://academictorrents.com/details/63a083c3b5e0ecc6fd524f0c699853c4a45a4054</guid>
<link>https://academictorrents.com/details/63a083c3b5e0ecc6fd524f0c699853c4a45a4054</link>
<description>This is a dump of the French language Wikipedia, taken 2025-06-01, in multistream format. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute, including via BitTorrent, is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>6821495711</size>
</item><item>
<title>eswiki-20250601-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>13329bab40a1456fad8f7f7f5bd04efda72ae12a</infohash>
<guid>https://academictorrents.com/details/13329bab40a1456fad8f7f7f5bd04efda72ae12a</guid>
<link>https://academictorrents.com/details/13329bab40a1456fad8f7f7f5bd04efda72ae12a</link>
<description>This is a dump of the Spanish language Wikipedia, taken 2025-06-01, in multistream format. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute, including via BitTorrent, is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>5013958579</size>
</item><item>
<title>dewiki-20250601-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>1b9bfc16bff664233f2affc474d40bbf0b6213e1</infohash>
<guid>https://academictorrents.com/details/1b9bfc16bff664233f2affc474d40bbf0b6213e1</guid>
<link>https://academictorrents.com/details/1b9bfc16bff664233f2affc474d40bbf0b6213e1</link>
<description>This is a dump of the German language Wikipedia, taken 2025-06-01, in multistream format. For more information please see [1]. This content is licensed under the Creative Commons Attribution Share-Alike license version 4.0 (CC-BY-SA 4.0). More information may be found at [2]. Permission to use and distribute, including via BitTorrent, is granted under the Wikimedia Foundation's Terms of Use at [3]. There is additional copyright information available at [4] that may also be informative. [1] https://wikipedia.org/wiki/WP:Download [2] https://creativecommons.org/licenses/by-sa/4.0/ [3] https://wikimediafoundation.org/wiki/Terms_of_Use [4] https://archive.org/download/wikimediadownloads/legal.html</description>
<size>7795225491</size>
</item><item>
<title>noaa-ncei-climate-normals-annual-monthly-daily-hourly</title>
<category>Dataset</category>
<infohash>e49ada3218036c3a5523b8f02e59955205f6a59a</infohash>
<guid>https://academictorrents.com/details/e49ada3218036c3a5523b8f02e59955205f6a59a</guid>
<link>https://academictorrents.com/details/e49ada3218036c3a5523b8f02e59955205f6a59a</link>
<description>The U.S. Climate Normals are a large suite of data products that provide information about typical climate conditions for thousands of locations across the United States. Normals act both as a ruler to compare today's weather and tomorrow's forecast, and as a predictor of conditions in the near future. The official normals are calculated for a uniform 30-year period, and consist of annual/seasonal, monthly, daily, and hourly averages and statistics of temperature, precipitation, and other climatological variables from almost 15,000 U.S. weather stations. NCEI generates the official U.S. normals every 10 years in keeping with the needs of our user community and the requirements of the World Meteorological Organization (WMO) and National Weather Service (NWS). The 1991-2020 U.S. Climate Normals are the latest in a series of decadal normals first produced in the 1950s. They were first released in May 2021 (v1.0.0), and the statistics for 23 of the sites were reissued in 2023 (v1.0.1). These data allow travelers to pack the right clothes, farmers to plant the best crop varieties, and utilities to plan for seasonal energy usage. Many other important economic decisions that are made beyond the predictive range of standard weather forecasts are either based on or influenced by climate normals. Monthly gridded climate normals are available for the contiguous U.S.; see the Gridded Normals tab for more information.</description>
<size>4195242104</size>
</item><item>
<title>FishTrack23: An Ensemble Underwater Dataset for Multi-Object Tracking</title>
<category>Dataset</category>
<infohash>70695b973afa53be67dbfb72a2478775885598b9</infohash>
<guid>https://academictorrents.com/details/70695b973afa53be67dbfb72a2478775885598b9</guid>
<link>https://academictorrents.com/details/70695b973afa53be67dbfb72a2478775885598b9</link>
<description>Tracking fish in optical underwater imagery contains a number of challenges not encountered in terrestrial domains. Video may contain large schools comprised of many individuals, dynamic natural backgrounds, variable target scales, volatile collection conditions, and non-fish moving confusors including debris, marine snow, and other organisms. Lastly, there is a lack of large public datasets for algorithm evaluation available in this domain. FishTrack aims to address these challenges by providing a large quantity of expert-annotated fish groundtruth tracks, in imagery and video collected across a range of different backgrounds, locations, collection conditions, and organizations.</description>
<size>53004861167</size>
</item><item>
<title>Reddit comments/submissions 2025-05</title>
<category>Dataset</category>
<infohash>186a0f85a52ff4f1b08677cd312423ace9b34976</infohash>
<guid>https://academictorrents.com/details/186a0f85a52ff4f1b08677cd312423ace9b34976</guid>
<link>https://academictorrents.com/details/186a0f85a52ff4f1b08677cd312423ace9b34976</link>
<description>Reddit comments and submissions from 2025-05. Documentation, JSON schemas, and more can be found at https://github.com/ArthurHeitmann/arctic_shift. Helper scripts for processing the files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>56891545099</size>
</item><item>
<title>noaa-ncei-hydrological-properties</title>
<category>Dataset</category>
<infohash>afe8462fd46ad9e9c54978ca01e474434710d462</infohash>
<guid>https://academictorrents.com/details/afe8462fd46ad9e9c54978ca01e474434710d462</guid>
<link>https://academictorrents.com/details/afe8462fd46ad9e9c54978ca01e474434710d462</link>
<description>The NOAA Hydrological Properties for Applications Thematic Climate Data Record (TCDR) consists of Advanced Microwave Sounding Unit-A (AMSU-A), Advanced Microwave Sounding Unit-B (AMSU-B), and Microwave Humidity Sounder (MHS) data to help with the long-term monitoring of the global water cycle. The data cover a time period from 1998 to 2010, at roughly 48 km (AMSU-A) and 16 km (AMSU-B/MHS) resolution over the entire globe, with 30 (AMSU-A) and 90 (AMSU-B/MHS) observations per scan. Visual inspections and verification of the various corrections were applied to the data to improve data accuracy. AMSU-A TCDR variables consist of Total Precipitable Water (TPW), Cloud Liquid Water (CLW), Sea-Ice Concentration (SIC), Land Surface Temperature (LST), and Land Surface Emissivity at 23, 31, and 50 GHz (LSE). AMSU-B/MHS TCDR variables consist of Ice Water Path (IWP), Rain Rate (RR), Snow Cover (SC), and Snow Water Equivalent (SWE). The data are well suited to tasks such as validating climate model simulations, identifying climate extremes, validating other observations, and more.</description>
<size>582935465840</size>
</item><item>
<title>us-va-open-data-bulk-download</title>
<category>Dataset</category>
<infohash>8857f112c317757ddd93e8f1849412b7ee1c9273</infohash>
<guid>https://academictorrents.com/details/8857f112c317757ddd93e8f1849412b7ee1c9273</guid>
<link>https://academictorrents.com/details/8857f112c317757ddd93e8f1849412b7ee1c9273</link>
<description>Contains roughly 1200 datasets from the VA Open Data catalog, as made available through the data.json. Excludes datasets that have no public downloads. Also includes a _failures.csv for download links that lead to 404 errors, for posterity. About Open Data (from site): Open data is VA data that is freely available to the public. It is a by-product of the work the VA does for Veterans, and is not personal data (names, addresses, birthplace, etc.). The idea of open data is that public data should be easily accessible and usable by anyone to create products like web or mobile apps, infographics, or stories - the sky is really the limit. For years, government data has made it possible for innovators and entrepreneurs to create products of value for the American people (if you have ever used a GPS you have benefited from one of these products). We want to keep this tradition going. Packaged with the experimental SciOp CLI Pack command.</description>
<size>7465867116</size>
</item><item>
<title>enwiki-20250601-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>21dc1fcba351758654f37fb904e629d143dc2a9f</infohash>
<guid>https://academictorrents.com/details/21dc1fcba351758654f37fb904e629d143dc2a9f</guid>
<link>https://academictorrents.com/details/21dc1fcba351758654f37fb904e629d143dc2a9f</link>
<description>English Wikipedia Multistream 2025-06-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>25107200863</size>
</item><item>
<title>noaa-ncei-us-radiosonde-bufr</title>
<category>Dataset</category>
<infohash>1d06d12df6e1e8cd3e01aaa0e879b7802c54f7e4</infohash>
<guid>https://academictorrents.com/details/1d06d12df6e1e8cd3e01aaa0e879b7802c54f7e4</guid>
<link>https://academictorrents.com/details/1d06d12df6e1e8cd3e01aaa0e879b7802c54f7e4</link>
<description>The Binary Universal Form for the Representation of meteorological data (BUFR) is a binary data format maintained by the World Meteorological Organization (WMO). In 2015, some of the US upper-air stations began to include high-resolution radiosonde measurements in the data packages sent to NCEI. These high-resolution BUFR files have names of the form Cnnn, where nnn represents the ascension number. The BUFR includes 1) metadata: station information, instrument information, balloon release information; 2) up to 1-second observations: elapsed time, level type, location displacement, pressure, height, temperature, dew point temperature, wind speed, wind direction. Time coverage is September 2015 to present; spatial coverage is US CONUS, Alaska, Hawaii, and territories.</description>
<size>268466962597</size>
</item><item>
<title>noaa-ncei-national-marine-sanctuary-program</title>
<category>Dataset</category>
<infohash>74c3ac29ebe1ba5b9cbfddb4fc77a6d1c1489983</infohash>
<guid>https://academictorrents.com/details/74c3ac29ebe1ba5b9cbfddb4fc77a6d1c1489983</guid>
<link>https://academictorrents.com/details/74c3ac29ebe1ba5b9cbfddb4fc77a6d1c1489983</link>
<description>Contains data gathered for the National Marine Sanctuary Program, including: BML data: This collection includes netCDF format data collected by Cordell Bank National Marine Sanctuary (CBNMS), Farallones National Marine Sanctuary (GFNMS), and Bodega Marine Laboratory (BML) to understand the physical processes at Cordell Bank (CBNMS), Bodega Head (GFNMS), Southeast Farallon Island (GFNMS), and Double Point (GFNMS), and their potential effects on marine ecology. WCOS data: This data set includes temperature and current data from moorings deployed under the NOAA National Marine Sanctuary Program's West Coast Observation System (WCOS). All moorings are in relatively shallow water (100 m or less, with most in approximately 20 or 15 m). All are equipped with a vertical array of thermistors. Channel Islands NMS moorings are additionally equipped with bottom-mounted ADCPs. The thermistors are programmed with a two-minute sampling interval (no internal averaging); ADCPs are programmed with continuous two-minute ensembles (45 pings per ensemble) and a vertical resolution of one meter. The data were collected at 37 stations within the boundaries of four National Marine Sanctuaries on the West Coast of the United States, between 2004 and 2011. The data are represented in a COARDS profile netCDF format (IOOS standard format), and are accompanied by metadata in FGDC standard format, otherwise known as the Content Standard for Digital Geospatial Metadata (CSDGM).</description>
<size>388133133</size>
</item><item>
<title>nist-national-vulnerability-database-cve-cpe-cwe</title>
<category>Dataset</category>
<infohash>fe623a0bbd13e8f152ea2317f151d8d3719ba96b</infohash>
<guid>https://academictorrents.com/details/fe623a0bbd13e8f152ea2317f151d8d3719ba96b</guid>
<link>https://academictorrents.com/details/fe623a0bbd13e8f152ea2317f151d8d3719ba96b</link>
<description>Contains all CVE, CPE and CWE records from the National Vulnerability Database as of 5/29/2025. The NVD is the U.S. government repository of standards-based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. The NVD includes databases of security checklist references, security-related software flaws, product names, and impact metrics. Originally created in 1999 (called Internet - Categorization of Attacks Toolkit or ICAT), the NVD has undergone multiple iterations and improvements and will continue to do so to deliver its services. The NVD is a product of the NIST Computer Security Division, Information Technology Laboratory. The NVD performs enrichment on CVEs that have been published to the CVE List. NVD staff are tasked with enrichment of CVEs by aggregating data points from the description, references supplied, and any supplemental data that can be found publicly at the time. This enrichment results in the association of impact metrics (Common Vulnerability Scoring System - CVSS), vulnerability types (Common Weakness Enumeration - CWE), and applicability statements (Common Platform Enumeration - CPE), as well as other pertinent metadata. The NVD does not actively perform vulnerability testing, relying on vendors, third-party security researchers, and vulnerability coordinators to provide information that is then used to assign these attributes. As additional information becomes available, CVSS assessments, CWEs, and applicability statements are subject to change. The NVD endeavors to re-assess CVEs that have been amended as time and resources allow, to ensure that the information offered is up to date.</description>
<size>1458628411</size>
</item><item>
<title>noaa-ncei-ratpac</title>
<category>Dataset</category>
<infohash>73c6aeaa695cd5c2d1779d5f26a45c56180083ee</infohash>
<guid>https://academictorrents.com/details/73c6aeaa695cd5c2d1779d5f26a45c56180083ee</guid>
<link>https://academictorrents.com/details/73c6aeaa695cd5c2d1779d5f26a45c56180083ee</link>
<description>The Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC) are a collection of radiosonde-based temperature anomaly time series for 1958-present. These products address the temporal inhomogeneities created by changes to instruments and observing practices, and reduce them as much as possible. RATPAC time series were developed in the early 2000s, and are based on data from 85 weather balloon stations distributed around global land areas. Data are available on 13 atmospheric pressure levels and for three atmospheric layers between the Earth's surface and lower stratosphere. This product is recommended for assessing long-term changes in tropospheric and lower stratospheric temperatures on large spatial scales. For other uses, please refer to the Integrated Global Radiosonde Archive (IGRA).</description>
<size>26138816</size>
</item><item>
<title>noaa-ncei-snowstorm-database</title>
<category>Dataset</category>
<infohash>ca3f9f1b95df0c3b6f0c45c7c640d8ce8f6de6a9</infohash>
<guid>https://academictorrents.com/details/ca3f9f1b95df0c3b6f0c45c7c640d8ce8f6de6a9</guid>
<link>https://academictorrents.com/details/ca3f9f1b95df0c3b6f0c45c7c640d8ce8f6de6a9</link>
<description>The Snowstorm Database is a collection of over 500 snowstorms dating back to 1900 and updated operationally. Only storms having large areas of heavy snowfall (10-20 inches or greater) are included. The spatial extent includes the contiguous U.S., but most storms are in the eastern two thirds of the U.S. This is the only comprehensive data set with starting and ending dates along with daily and total storm snowfall for large snowstorms from 1900 to the present. The data is archived in shapefile format, one shapefile per storm. Shapefiles are a non-proprietary spatial format widely used in Geographical Information Systems (GIS). Each shapefile contains daily and storm total snowfall for weather stations that were affected by the snowstorm. The snowfall data comes from the Global Historical Climatology Network - Daily (GHCN-D).</description>
<size>65807719</size>
</item><item>
<title>noaa-ncei-coral-reef-temperature-anomaly-database</title>
<category>Dataset</category>
<infohash>7a57460a4810b89fb916d48fad6e609a0ba1b64a</infohash>
<guid>https://academictorrents.com/details/7a57460a4810b89fb916d48fad6e609a0ba1b64a</guid>
<link>https://academictorrents.com/details/7a57460a4810b89fb916d48fad6e609a0ba1b64a</link>
<description>Version 6 of the Coral Reef Temperature Anomaly Database (CoRTAD) is a global, 4 km, sea surface temperature (SST) and related thermal stress metrics dataset for 1982-01-02 to 2022-12-30. The CoRTAD contains weekly-averaged SSTs, SST anomaly (SSTA, weekly SST minus weekly climatological SST), thermal stress anomaly (TSA, weekly SST minus the maximum weekly climatological SST), SSTA Degree Heating Week (SSTA_DHW, sum of previous 12 weeks when SSTA is greater than or equal to 1 degree C), SSTA Frequency (number of times over previous 52 weeks that SSTA is greater than or equal to 1 degree C), TSA DHW (TSA_DHW, also known as a Degree Heating Week, sum of previous 12 weeks when TSA is greater than or equal to 1 degree C), and TSA Frequency (number of times over previous 52 weeks that TSA is greater than or equal to 1 degree C). In addition, the CoRTAD includes ancillary sea ice concentration and marine wind speed data. These data are the sixth in the series of CoRTAD datasets originally created in association with Elizabeth Selig (Director, Marine Science, Conservation International) and John Bruno (University of North Carolina; UNC - Chapel Hill). This dataset is based on the Pathfinder V5.3 dataset, and is created with support from the NOAA Coral Reef Conservation Program.</description>
<size>332370946551</size>
</item><item>
<title>noaa-ncei-argo-gadr-gdac-data</title>
<category>Dataset</category>
<infohash>3f4dc4c18816475ec608589c5141372576a80898</infohash>
<guid>https://academictorrents.com/details/3f4dc4c18816475ec608589c5141372576a80898</guid>
<link>https://academictorrents.com/details/3f4dc4c18816475ec608589c5141372576a80898</link>
<description>The Argo Ocean Profiling Network is a global ocean observing system developed to address the lack of data coverage in parts of the world ocean, as well as the need for regular capture intervals to enable both short and long-term climate predictions. NCEI operates and manages the Global Argo Data Repository (GADR), which provides long term archive services to store and preserve data. NCEI also implements reanalysis updates and corrections provided by the U.S. Global Ocean Data Assimilation Experiment (GODAE) and French Institute for Research and Exploration of the Sea (IFREMER) Global Data Assembly Centers (GDACs), which provide access to real-time and near real-time data and perform initial quality control measures. Contains both GADR and GDAC data.</description>
<size>33185329832</size>
</item><item>
<title>noaa-ncei-ssmi-ssmis-hydrological-products</title>
<category>Dataset</category>
<infohash>d758adddb898d3b8c5a02c703774be98c6ffbbf3</infohash>
<guid>https://academictorrents.com/details/d758adddb898d3b8c5a02c703774be98c6ffbbf3</guid>
<link>https://academictorrents.com/details/d758adddb898d3b8c5a02c703774be98c6ffbbf3</link>
<description>The Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I) was a seven-channel passive microwave radiometer which became operational in July 1987. The SSM/I series was replaced by an advanced sensor, the SSMIS (Special Sensor Microwave Imager Sounder), which became operational in November 2005. Monthly averaged SSM/I and SSMIS products include precipitation, cloud liquid water, total precipitable water, snow cover, and sea-ice extent. These products can be used to evaluate the mean climate state and its interannual/seasonal variations, and to detect anomalies associated with large-scale (e.g. El Niño-Southern Oscillation (ENSO) and Arctic Oscillation) and regional climatic variations. A time series of the entire SSM/I and SSMIS archive includes data from July 1987 to present.</description>
<size>84960742721</size>
</item><item>
<title>nooa-ncei-ndbc-hfradar-radial</title>
<category>Dataset</category>
<infohash>540be3f4732dc099ff43b77882b53830ccc980af</infohash>
<guid>https://academictorrents.com/details/540be3f4732dc099ff43b77882b53830ccc980af</guid>
<link>https://academictorrents.com/details/540be3f4732dc099ff43b77882b53830ccc980af</link>
<description>This dataset contains surface ocean radial velocity data obtained by HF-radar from stations located along coastal waters of the United States. Radial velocity files contain metadata in a key-value format, while the measured velocities and associated ancillary data are reported in a tab-delimited format. The National Data Buoy Center (NDBC), in collaboration with the Scripps Institution of Oceanography through June 2025 and thereafter with NOAA's National Environmental Satellite, Data, and Information Service (NESDIS), assembles the data from the Integrated Ocean Observing System (IOOS) Surface Currents Program's High-Frequency (HF) Radar National Network and submits the data on a monthly basis to NOAA's National Centers for Environmental Information (NCEI). Remote sensing of ocean surface velocity from shore-based HF-radar sensors bridges an operational observational gap between point samples obtained from in-situ sampling and synoptic-scale, relatively low-resolution data obtained from satellites, by providing continuous mesoscale coverage at relatively high resolution near the coast. HF-radar systems measure the speed of ocean surface currents in directions radial to the antenna in near real time. Radial velocities alone are a measurement of surface ocean velocity projected along the direction radial to the antenna, which shows only the currents' movement towards or away from the HF-radar sensor's antenna. Radial measurements of ocean velocity may be used directly in some applications such as model assimilation, but are commonly used in combination with overlapping sites to estimate the total vector ocean velocity. Systems operate continuously in all weather conditions and are installed near the coastline. Range resolution of measured currents is determined by the radar transmit bandwidth used. Bandwidth is controlled by radio frequency licenses and translates to range resolutions of 0.5 to 6 kilometers. Maximum ranges of current measurements also depend on radar transmit frequency and vary from about 40 km offshore to about 200 km offshore. Velocities are measured in the upper 0.3 - 2.5 meters of the ocean depending on the operating frequency and vertical velocity shear profile.</description>
<size>868392817954</size>
</item><item>
<title>epa-water-quality-analysis-restoration-data</title>
<category>Dataset</category>
<infohash>84315cd5876c68546cab11ba65b97553bc0d4543</infohash>
<guid>https://academictorrents.com/details/84315cd5876c68546cab11ba65b97553bc0d4543</guid>
<link>https://academictorrents.com/details/84315cd5876c68546cab11ba65b97553bc0d4543</link>
<description>Contains direct file downloads available through the Water section of the EPA data site, including data from: - Community Financing (FACT, Water Finance Clearinghouse) - FRS - NHDPlus - NLFA - NPS - WATERS - WQP - WSIO - BEACON 2.0</description>
<size>164283513869</size>
</item><item>
<title>noaa-nesdis-star-coral-reef-watch</title>
<category>Dataset</category>
<infohash>5afeac2b34c9514ac96ad2968ad8cb85a975f2e4</infohash>
<guid>https://academictorrents.com/details/5afeac2b34c9514ac96ad2968ad8cb85a975f2e4</guid>
<link>https://academictorrents.com/details/5afeac2b34c9514ac96ad2968ad8cb85a975f2e4</link>
<description>Contains full data used by the Coral Reef Watch webportal. Coral reefs are one of Earth s most diverse ecosystems. They provide significant ecological, economic, and societal benefits valued, globally, at about USD$9.8 trillion each year (de Groot et al. 2012, Costanza et al. 2014). Unfortunately, reefs worldwide are threatened by an increasing array of impacts, primarily from bleaching heat stress, unsustainable fishing practices, and land-based pollution. First observed in the early 1980s, mass coral bleaching (whereby corals bleach over a wide area that can span tens, hundreds, or even thousands of kilometers) has become one of the most visible and damaging marine ecological impacts of persistently rising ocean temperatures. Bleaching is the process by which corals lose the symbiotic algae that give them their distinctive colors and main energy sources. If a coral is severely bleached, disease and death become likely. Severe coral bleaching has become more extensive, frequent, and intense. This can be seen in the acceleration of heat stress events that cause mass bleaching, and in new multi-decadal bleaching observation datasets. As manifested by the devastating 2014-2017 global coral bleaching event (now considered the longest, most widespread and most damaging coral bleaching event on record), mass bleaching events around the globe are often lasting many months; are becoming an annual event; and are impacting coral reefs that never bleached before. It s clear that remotely monitoring coral reefs and providing actionable intelligence are critical for early detection, on-the-ground response, communication, and enhancing coral reef resilience. To address a defined need of coral reef managers around the world, NOAA established the Coral Reef Watch (CRW) program in 2000. For more than 20 years, NOAA CRW has utilized remote sensing, modeled and in-situ data to predict, observe, and alert users globally to threats to the coral reef environment. 
The near real-time satellite products and modeled Outlooks that comprise CRW's global early-warning system of coral reef environmental changes have successfully and accurately predicted and monitored all major mass coral bleaching events observed globally since 1997, and have provided other critical information to users, especially during periods of severe ocean heat stress. NOAA CRW serves all coral reef ecosystem managers with custodial duties for tropical coral reefs; in-water coral reef monitoring networks; the private sector (including scuba diving operators); scientific researchers at universities and research organizations; educators; students; and the public. An extensive and diverse community of users regularly applies NOAA CRW's modeled predictions and near real-time satellite-based heat stress products to support conservation, restoration, and resilience-based research and management projects that aim to protect and/or restore coral reefs. Users apply NOAA CRW products to monitor and predict detrimental impacts to coral reefs worldwide; understand links between environmental conditions and ecosystem impacts; assess when reefs are vulnerable or resilient to a warming ocean and its impacts (especially coral bleaching and disease); and prepare and prioritize resources to implement timely, effective protective responses and adaptation actions, thereby improving coral reef management and regulation. Coral bleaching response plans, incident action plans, and restoration plans around the world rely on NOAA CRW's Bleaching Alert Levels (expanded in December 2023 to include Bleaching Alert Levels 3-5, in response to the extreme marine heatwaves of 2023) to assist with or help guide planning and implementation of work by in-water monitoring and management networks, including in emergency situations.
In response to CRW's modeled predictions, satellite products, early warnings, and frequent communications, users have reduced local stressors during periods of high ocean heat stress and extreme marine heatwaves, including by closing major scuba diving and fishing areas. They also have rescued rare corals; shaded/cooled key nursery reefs; and conducted emergency, in-water operations to remove corals from local reefs and nearshore nurseries and house them, temporarily, in offshore and land-based nurseries. In times of low or no marine heat stress, users also apply CRW products to identify appropriate locations for, and then implement, conservation and restoration initiatives, to give transplanted corals or corals grown in-situ the best chance at survival. NOAA CRW is uniquely qualified to provide essential environmental intelligence. Its extensive partnership network with data providers, scientists, and coral reef managers allows CRW to leverage key partner efforts in the U.S. and internationally, to undertake research to develop the best possible products for its users, and to better understand how stakeholders use its tools. CRW works closely with its users and partners throughout product conceptualization, research, development, implementation, testing/enhancement, and operationalization. CRW provides training domestically and internationally in appropriate product use/application and garners feedback to improve management tools. This allows NOAA to provide a better understanding of environmental threats to coral reefs and establishes sound practices for the use of CRW's information to enhance resilience-based coral reef management.</description>
<size>1376805413951</size>
</item><item>
<title>noaa-ncei-voluntary-observing-ship-climate-project</title>
<category>Dataset</category>
<infohash>296dbe297890222e11174342982aba10e3529e7d</infohash>
<guid>https://academictorrents.com/details/296dbe297890222e11174342982aba10e3529e7d</guid>
<link>https://academictorrents.com/details/296dbe297890222e11174342982aba10e3529e7d</link>
<description>Contains VOS Climate Project data from the NOAA NCEI PUB HTTPS portal. Background: The international scheme by which ships plying the various oceans and seas of the world are recruited by National Meteorological Services (NMSs) for taking and transmitting meteorological observations is called the World Meteorological Organization (WMO) Voluntary Observing Ships' (VOS) scheme. The forerunner of the scheme dates back as far as 1853, the year in which delegates of ten maritime countries came together at a conference in Brussels, on the initiative of Matthew F. Maury, then director of the United States Navy Hydrographic Office, to discuss his proposal for the establishment of a uniform system for the collection of meteorological and oceanographical data from the oceans and the use of these data for the benefit of shipping in return. The conference accepted his proposal and adopted a standard form of ship's log and a set of standard instructions for the necessary observations. From the very beginning, ships' meteorological observations were recognized as being essential for the provision of safety-related meteorological services for ships at sea, as well as for climatological purposes. The Situation Today: At the present time, the contribution which VOS meteorological reports make to operational meteorology, to marine meteorological services and to global climate studies is unique and irreplaceable. During the past few decades, the increasing recognition of the role of the oceans in the global climate system has placed even greater emphasis on the importance of marine meteorological and oceanographical observing systems. One of the major continuing problems facing meteorology is the scarcity of data from vast areas of the world's oceans (the so-called 'data sparse areas') in support of basic weather forecasting, the provision of marine meteorological and oceanographic services, and climate analysis and research.
While the new generation of meteorological satellites helps to overcome these problems, data from more conventional platforms, in particular the voluntary observing ships, remain essential. These ship observations provide ground truth for the satellite observations, important information which the satellites cannot observe, essential contributions to the data input for the numerical weather prediction (NWP) models, and real-time reports which can be used immediately in services for the mariner. In addition to their use in NWP, reports from ships at sea are also used operationally, even more directly, in the preparation of forecasts and warnings, including those for the Global Maritime Distress and Safety System (GMDSS), and issued specifically for the mariner. Thus, without VOS observations, reliable and timely services for mariners cannot be provided. The VOS Fleet Size: A peak in total VOS was reached in 1984/85 when about 7700 ships worldwide were on the WMO VOS Fleet List. Since then there has been an irregular but marked decline, and in June 1994 the Fleet strength had dropped to about 7200 ships. These numbers have continued to decline and are currently estimated at only about 4000 ships worldwide. As might be expected, real-time reports from the VOS are heavily concentrated along the major shipping routes, primarily in the North Atlantic and North Pacific Oceans, leaving data sparse areas across all the southern hemisphere oceans. While this situation certainly reflects the relatively small numbers of ships sailing in these waters, it also makes it more essential that ships sailing in these areas should be part of the VOS and thus contribute to the global observing program and consequent enhancement of the forecast and warning services to the mariner.
Of course, as VOS reports are part of a global data capture program, their reports are of value from all the oceans and seas of the world, and even the well frequented North Atlantic and North Pacific Oceans require more observational data. Participation: What are the charges to be part of the VOS scheme? THERE ARE NO CHARGES TO THE SHIP OR TO THE OPERATOR. The tested marine meteorological instruments necessary to undertake weather observing at sea are usually supplied free of charge to the ship and installed by a professional from the NMS, usually a trained Port Meteorological Officer (PMO), who will provide advice on the technique of observing at sea, explain the use of the WMO SHIP code and offer guidance on the transmission of the observations from the ship to shore, using the ship's own satcom or terrestrial communications equipment. THERE ARE NO CHARGES TO THE SHIP FOR THE TRANSMISSION OF VOS WEATHER REPORTS. After recruitment into the VOS program, the meteorological instruments will be regularly serviced, without charge to the ship or ship owner, by an official of either the 'recruiting NMS' or the worldwide network of WMO Members who operate the international VOS program.</description>
<size>9219107732</size>
</item><item>
<title>epa-toxic-release-inventory-basic-plus</title>
<category>Dataset</category>
<infohash>c7bb88786cb189ec0a80814449aa618f8996fba4</infohash>
<guid>https://academictorrents.com/details/c7bb88786cb189ec0a80814449aa618f8996fba4</guid>
<link>https://academictorrents.com/details/c7bb88786cb189ec0a80814449aa618f8996fba4</link>
<description>TRI Basic Plus Data Files: Calendar Years 1987-Present. Update Status - Includes reporting forms processed as of: October 23, 2024. EPA has been collecting Toxics Release Inventory (TRI) data since 1987. The "Basic Plus" data files include ten file types that collectively contain all of the data fields from the TRI Reporting Form R and Form A. The files themselves are in tab-delimited .txt format and then compressed into a .zip file. File Types and Contents 1a: Facility, chemical, releases and other waste management summary information 1b: Chemical activities and uses 2a: On- and off-site disposal, treatment, energy recovery, and recycling information; non-production-related waste managed quantities; production/activity ratio information; and source reduction activities 2b: Detailed on-site waste treatment methods and efficiency 3a: Transfers off site for disposal and further waste management 3b: Transfers to Publicly Owned Treatment Works (POTWs) (RY1987 - RY2010) 3c: Transfers to Publicly Owned Treatment Works (POTWs) (RY2011 - Present) 4: Facility information 5: Optional information on source reduction, recycling and pollution control (RY2005 - Present) 6: Additional miscellaneous and optional information (RY2010 - Present) Important Notes Quantities of dioxin and dioxin-like compounds are reported in grams, while all other chemicals are reported in pounds. This webpage contains the most recent versions of all TRI data files; facilities may revise previous years' TRI submissions if necessary, and any such changes will be reflected in these files. For this reason, data contained in these files may differ from data used to construct the TRI National Analysis.</description>
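Since each Basic Plus file is tab-delimited text inside a .zip archive, records can be streamed with the Python standard library without extracting the archive first. A minimal sketch; the member name and column names here are made-up placeholders, not the actual Basic Plus schema:

```python
import csv
import io
import zipfile

def iter_tri_records(zip_source, member):
    """Stream rows from a tab-delimited file inside a .zip container."""
    with zipfile.ZipFile(zip_source) as zf:
        with zf.open(member) as fh:
            text = io.TextIOWrapper(fh, encoding="utf-8")
            yield from csv.DictReader(text, delimiter="\t")

# Demo on an in-memory archive; "demo_1a.txt" and its columns are
# hypothetical, not the real TRI field names.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("demo_1a.txt", "FACILITY\tCHEMICAL\tUNIT_OF_MEASURE\nAcme Co\tLead\tPounds\n")
rows = list(iter_tri_records(buf, "demo_1a.txt"))
print(rows[0]["CHEMICAL"])  # prints: Lead
```

Streaming this way matters for the larger reporting-year files, since only one row is held in memory at a time.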
<size>2204283611</size>
</item><item>
<title>Reddit comments/submissions 2025-04</title>
<category>Dataset</category>
<infohash>a04a7ae237e9a45a69b4fe1ffe882a61ccc69761</infohash>
<guid>https://academictorrents.com/details/a04a7ae237e9a45a69b4fe1ffe882a61ccc69761</guid>
<link>https://academictorrents.com/details/a04a7ae237e9a45a69b4fe1ffe882a61ccc69761</link>
<description>Reddit comments and submissions from 2025-04, collected by u/RaiderBDev. These are zstandard-compressed ndjson files. Example Python scripts for parsing the data can be found at https://github.com/Watchful1/PushshiftDumps Previous months can be found at https://academictorrents.com/details/ba051999301b109eab37d16f027b3f49ade2de13</description>
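Since each dump is newline-delimited JSON (one comment or submission per line), parsing reduces to decompressing and decoding line by line. A minimal sketch; the helper name iter_ndjson is ours, and the decompression wrapper (which relies on the third-party zstandard package) is shown only as a comment:

```python
import io
import json

def iter_ndjson(stream):
    """Yield one dict per newline-delimited JSON record, skipping blank lines."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# The dumps themselves are zstandard-compressed. With the third-party
# `zstandard` package (assumption: installed via `pip install zstandard`),
# the raw file would be wrapped before calling iter_ndjson, e.g.:
#   import zstandard
#   dctx = zstandard.ZstdDecompressor(max_window_size=2**31)
#   stream = io.TextIOWrapper(dctx.stream_reader(open(path, "rb")), encoding="utf-8")

# Demo on an in-memory sample shaped like comment records.
sample = io.StringIO('{"author": "u1", "body": "hi"}\n{"author": "u2", "body": "yo"}\n')
records = list(iter_ndjson(sample))
print(len(records), records[0]["author"])  # prints: 2 u1
```

The large decompression window in the comment reflects how these dumps are typically compressed; the linked PushshiftDumps repository has complete, tested scripts.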
<size>54840584252</size>
</item><item>
<title>epa-superfund-data-reports</title>
<category>Dataset</category>
<infohash>27b48935d0336e4807042a9b15d978095875d27d</infohash>
<guid>https://academictorrents.com/details/27b48935d0336e4807042a9b15d978095875d27d</guid>
<link>https://academictorrents.com/details/27b48935d0336e4807042a9b15d978095875d27d</link>
<description>Superfund Data Reports The datasets below cover active and archived contaminated sites evaluated by the Superfund program, including proposed and final National Priorities List (NPL) sites. Sites with Potential Smelting-Related Operations (FOIA 1): This report includes sites that have smelting-related, or potentially smelting-related, indicators in the Superfund database, the Superfund Enterprise Management System (SEMS). The report includes information on the site location as well as contaminants of concern. Completed RODs, ROD Amendments and ESDs (FOIA 2):  Displays completed Records of Decision (RODs), ROD Amendments, and Explanations of Differences (ESDs) for active and archived sites stored in SEMS. All Proposed, Final and Deleted NPL Sites (FOIA 3, FOIA 4, FOIA 5): This dataset is comprised of sites proposed to be added to the NPL, sites on the NPL, and sites deleted from the NPL. Lien on Property (FOIA 12): Displays Superfund lien on property activity, the lien property information, and the parties associated with the lien. Active Site Inventory (List 8R Active): Displays site and location information at active sites in SEMS. An active site is one at which site assessment, removal, remedial, enforcement, cost recovery or oversight activities are being planned or conducted. NPL sites include latitude and longitude information. For non-NPL sites, a brief site status is provided. Archived Site Inventory (List 8R Archived): Displays site and location information at archived sites. An archived site is one at which EPA has determined that assessment has been completed and no further remedial action is planned under the Superfund program. Contaminant of Concern Data for Decision Documents by Media, FYs 1981-2024 (Final NPL, Deleted NPL, and Superfund Alternative Approach Sites): Displays contaminant of concern data from Superfund decision documents issued in fiscal years 1981-2023. 
Includes final and deleted NPL sites as well as sites with a Superfund Alternative Approach (SAA) agreement in place. Remedy Component Data for Decision Documents by Media, FYs 1981-2024 (Final NPL, Deleted NPL, and Superfund Alternative Approach Sites):  Displays remedy component data for Superfund decision documents issued in fiscal years 1981-2023.  Includes final and deleted NPL sites as well as sites with a Superfund Alternative Approach (SAA) agreement in place. CERCLA to RCRA Site Associations (FOIA 8): Displays sites in the SEMS Active and Archived site inventories that may be associated in some manner to a RCRA facility. Public Settlement Report (FOIA 13): Displays all valid Superfund cost recovery and response enforcement instruments program-to-date. Superfund Alternative Approach Agreement Sites and Settlements (FOIA 14): Displays sites that have Superfund Alternative Approach (SAA) Agreements negotiated. SAA agreements are equivalent to an agreement negotiated at an NPL site. The report also includes settlement details. All Final NPL, Proposed NPL and Deleted NPL sites, for the entire United States (FOIA 15): Displays all NPL site information. This report also includes NPL date, site category, and contaminant information for each NPL site. Industrial/Manufactured Coal Gas Plant Locations (FOIA 16): Displays all sites that have a sub category of Coal gasification (CG), Oil and Gas Refining (OR), or Oil and Gas (OG). Superfund Site Location Information The dataset below provides geospatial information for proposed, final and deleted National Priorities List (NPL) sites and Superfund Alternative Approach sites. Geospatial data may not yet be available for every site in the defined universe. These data represent EPA s current understanding of the total footprint of sites. As site investigation and remediation progress, geospatial information may be modified or refined accordingly. 
Notice: The Agency is providing this geospatial information as a public service and does not vouch for the accuracy, completeness or currency of data. Data provided by external parties is not independently verified by EPA. These data are made available to the public strictly for informational purposes. Data do not represent EPA's official position, viewpoint or opinion, express or implied. This information is not intended for use in establishing liability or calculating Cost Recovery Statutes of Limitations and cannot be relied upon to create any rights, substantive or procedural, enforceable by any party in litigation with the United States or third parties. EPA reserves the right to change these data at any time without public notice. Archived Data and Reports CERCLIS was a Superfund data system that EPA decommissioned in 2014 following its deployment of the Superfund Enterprise Management System (SEMS). The datasets and reports in the table below draw upon the final CERCLIS dataset, which represents program progress as of the end of fiscal year 2013. CERCLIS (.dbf) and (.txt) Record Layout File: The record layout provides field names, character types, character positions and field lengths for the CERCLIS dBASEIII+ (.dbf) and CERCLIS ASCII Text (.txt) downloadable files. NPL Sites and Non-NPL Sites (SCAP 12): Displays the sequence of activities undertaken at all sites in CERCLIS, both National Priorities List (NPL) sites and non-NPL sites. The SCAP 12 can be customized to contain only NPL sites or non-NPL sites, as specified in the order. NPL sites include sites proposed to the NPL, sites currently on the final NPL, and sites deleted from the final NPL. Non-NPL sites include sites removed from the proposed NPL, sites withdrawn from the final NPL, sites being addressed as part of another NPL site, and all other non-NPL sites. Where available, the report includes funding information for each action, as well as site characterization data.
CERCLIS dBASEIII+ Format (.dbf): This file contains 55 dBASEIII+ (.dbf) files when decompressed. These files provide detailed information on hazardous waste sites, potential hazardous waste sites, and remedial activities across the nation. You will need a dBASEIII+ browser or program to read the files. You may also want to view the CERCLIS Record Layout to identify the named fields used. CERCLIS ASCII Text Format (.txt): This file (cerctxt.zip) contains 55 ASCII (.txt) formatted files when decompressed. Combined, these files provide detailed information on hazardous waste sites, potential hazardous waste sites, and remedial activities across the nation. Contaminants at CERCLIS Sites (List 10): Lists contaminants recorded for Superfund sites, including the contaminant Chemical Abstract Service (CAS) number and maximum concentration level. Affected media and the cleanup action(s) associated with the contaminant are also included, as well as basic site information for the affected sites. IC/EC in Remedy Decision Documents: This report contains data from official decision documents (e.g., Records of Decision (RODs), ROD Amendments, Explanation of Significant Differences) identifying institutional and/or engineering controls (IC/ECs) that are part of the selected remedy. This report does not indicate that the institutional and engineering controls are currently in place nor does it indicate that the ICs/ECs will be in place once the remedy is complete. It only indicates that the decision to include either of them in the remedy is documented as of the completed date of the document.</description>
<size>164015726</size>
</item><item>
<title>epa-chemical-data-reporting-data</title>
<category>Dataset</category>
<infohash>54badf9e1df6b792f039d732455e78426933196a</infohash>
<guid>https://academictorrents.com/details/54badf9e1df6b792f039d732455e78426933196a</guid>
<link>https://academictorrents.com/details/54badf9e1df6b792f039d732455e78426933196a</link>
<description>EPA has collected Chemical Data Reporting data since 1986; it was previously collected as Inventory Update Reporting. Collections occur approximately every four years, and reporting requirements changed from collection to collection. On this page you can access information from the latest collections, occurring in 2006, 2012, 2016, and 2020. Also includes sparser data from 1986-2006, along with full documentation for all subject years (contained in the yearly folders).</description>
<size>324899507</size>
</item><item>
<title>epa-aqs-pre-generated-files-dec-2024</title>
<category>Dataset</category>
<infohash>75bd916972ae78cbe59534dd88da55d11c4719f2</infohash>
<guid>https://academictorrents.com/details/75bd916972ae78cbe59534dd88da55d11c4719f2</guid>
<link>https://academictorrents.com/details/75bd916972ae78cbe59534dd88da55d11c4719f2</link>
<description>Description of data and formats This page contains large files of data intended for use by people who understand the EPA ambient air quality monitoring program and data. Other sources of data are available. The data available here is also available via an API. Some files contain data summarized on an annual basis (annual summary files), some contain data summarized on a daily basis (daily summary), and some contain raw data (sample data as reported). These are the standard time aggregations EPA calculates and stores (we do not have monthly data). All but the annual summary have data files grouped by parameter: Criteria Gases, Particulates, Meteorological, Toxics (see "Notes on Toxics" below), Ozone Precursors, Lead, and Blanks (Blanks are empty canisters that are measured for speciation quality assurance reasons). The annual summary files are small enough to include all data in one file. Each group has data listed by year, in reverse order, back to 1990. Each table entry has the file name, linked to the file, the size of the (zipped) file, the number of data rows in the file, and the date the file was last modified. EPA will update these files twice per year, in the spring and fall (late May and November). Keep in mind, data collection agencies have up to 6 months to report their data. The files are all comma-separated text with a header. Each aggregate level has a different format. Notes on Toxics: 1. While this page contains information on toxics, it is suggested you get toxics data from the EPA toxics archive. It is a value-added product that includes additional data quality assurance and reduction. The toxics data is included on this page for those needing it in a consistent format with other AQS data. 2. EPA has several ways of grouping parameters. For the parameters listed here as toxics we have included two groups of parameters: Core HAPS (Hazardous Air Pollutants) and VOCs (Volatile Organic Compounds).
The VOC list was expanded with the December 2015 update. These lists include the parameters as defined in the EPA AQS system as "CORE HAPS" or "VOC". You can view the list of parameters included in either category by visiting the AQS Code Tables page for Parameter Classes and searching for CORE_HAPS or VOC. The Air Quality Index (AQI) values are presented in several files. AQI is calculated each day for each monitor for the Criteria Gases and PM10 and PM2.5 (FRM and non-FRM). The AQI values are on the respective records in those Daily Summary files. There are also annual summary AQI files that show by CBSA (metro area) or county the annual statistics for AQI (max, number of values in each category, etc.). This file has one record per year per CBSA or county. There are also daily summary files that show the AQI by CBSA or county. These have one record per day per CBSA or county. For reference, there is also a file listing all of the files on this page and the date they were modified. Each file also includes the last change date of each record in the file.</description>
<size>24276712881</size>
</item><item>
<title>nclimgrid-daily-auxiliary</title>
<category>Dataset</category>
<infohash>ecb0491b22d58c3de76be8bbfe0a125fc851abb5</infohash>
<guid>https://academictorrents.com/details/ecb0491b22d58c3de76be8bbfe0a125fc851abb5</guid>
<link>https://academictorrents.com/details/ecb0491b22d58c3de76be8bbfe0a125fc851abb5</link>
<description>Contains just nClimGrid-Daily auxiliary files, accompanying the data files (separate torrent). The product referred to as nClimGrid-Daily is a set of daily gridded fields and area averages of temperature and precipitation that covers the Contiguous United States (CONUS) from 1951 to present and is updated daily. It is related to the monthly versions, nClimGrid and nClimDiv, but with a daily temporal resolution. The gridded fields are stored in netCDF format with one file per data month. Area averages for nine types of regions are provided in CSV format with one file per region type and data month. At a resolution of approximately 0.0417 degrees latitude and longitude (nominally 5-km grid), the gridded data provide smoothed representations of the point observations. Since the accuracy of estimates for individual grid points and days can be sensitive to local spatial variability and the ability of the available observations and interpolation technique to capture that variability, the nClimGrid-Daily dataset is recommended for applications that require the aggregation of estimates in space and/or time, such as climate monitoring analyses at regional to national scales.</description>
<size>543500833</size>
</item><item>
<title>noaa-ncei-coaps-shipboard-automated-meteorological-oceanographic-system</title>
<category>Dataset</category>
<infohash>66073bebf0ab0d11d6b261b7585bdeb50b9c7dbb</infohash>
<guid>https://academictorrents.com/details/66073bebf0ab0d11d6b261b7585bdeb50b9c7dbb</guid>
<link>https://academictorrents.com/details/66073bebf0ab0d11d6b261b7585bdeb50b9c7dbb</link>
<description>The Florida State University Center for Ocean-Atmospheric Prediction Studies (COAPS) has been operating a data assembly center (DAC) to collect, quality-evaluate, and distribute Shipboard Automated Meteorological and Oceanographic System (SAMOS) observations since 2005. A SAMOS is typically a computerized data logging system that records underway meteorological and near-surface oceanographic observations collected on research vessels. The SAMOS initiative does not provide specific instrumentation for vessels, but instead takes advantage of science-quality instrumentation already deployed on research vessels and select merchant ships. The SAMOS initiative provides vessel operators with desired sampling protocols and metadata requirements that will ensure the DAC receives a consistent series of observations from each vessel. The DAC and its partners in the U.S. National Oceanic and Atmospheric Administration (NOAA), the University National Oceanographic Laboratory System, the U.S. Coast Guard, and the U.S. Antarctic Program have implemented a series of daily data transmissions from ship-to-shore using an email protocol. A set of observations recorded at one-minute intervals for the previous day arrives at the DAC soon after 0000 UTC and undergoes automated quality evaluation. A trained data analyst reviews data and responds directly to vessels at sea when problems are identified. A secondary level of visual quality control is completed after all data from a single ship and day are merged into a common daily file (allowing for delayed data receipts). All quality-evaluated data are freely available to the user community and are distributed to national archive centers. This dataset contains all of these data.</description>
<size>16043210508</size>
</item><item>
<title>noaa-ncei-national-marine-ecosystem-status</title>
<category>Dataset</category>
<infohash>0511dadec387ba851faa15a5c2cc8941981cb260</infohash>
<guid>https://academictorrents.com/details/0511dadec387ba851faa15a5c2cc8941981cb260</guid>
<link>https://academictorrents.com/details/0511dadec387ba851faa15a5c2cc8941981cb260</link>
<description>Includes data files of the 2022 National Marine Ecosystem Status website. The National Marine Ecosystem Status website was created to provide a snapshot of major U.S. marine and Great Lakes ecosystem indicators. This site captures the status and trends of eight U.S. ecosystem regions and the overall national status, while also providing the opportunity to explore the key indicators for a particular topic area. For some indicators, data is shown at sub-LME scales such as island territories or for particularly important areas like coral reefs. This site provides users with links to more detailed sources of NOAA data and information. Information on this site is sourced from publicly available data and is updated annually. By tracking and communicating these data both spatially and thematically, we can monitor the status of U.S. ocean, Great Lakes and coastal ecosystems that provide food, jobs, security, and well-being to millions of people. This reporting is meant to allow the U.S. population to see the performance of their marine ecosystems.</description>
<size>118850</size>
</item><item>
<title>noaa-ncei-snow-cover-extent</title>
<category>Dataset</category>
<infohash>0d4b0cf04ee0a2740f1a1c70780fcc91f87f56de</infohash>
<guid>https://academictorrents.com/details/0d4b0cf04ee0a2740f1a1c70780fcc91f87f56de</guid>
<link>https://academictorrents.com/details/0d4b0cf04ee0a2740f1a1c70780fcc91f87f56de</link>
<description>This NOAA Climate Data Record (CDR) is a record for the Northern Hemisphere (NH) Snow Cover Extent (SCE) spanning from October 4, 1966 to present, updated monthly after the 10th of each month. Data prior to June 1999 in the NH SCE CDR are based on satellite-derived maps of NH SCE produced weekly by trained NOAA meteorologists. In June 1999 weekly NOAA NH SCE maps ceased production, and were replaced by daily SCE output from the Interactive Multisensor Snow and Ice Mapping System (IMS). The weekly SCE maps are digitized to an 88x88 (cells) Cartesian grid laid over a NH polar stereographic projection. Each grid cell in the NH SCE CDR has a binary value, indicating snow covered or snow free. The NH SCE CDR has been used in international assessments of climate variability and change, and in investigations regarding the role of snow cover in the climate system. Mapping accuracy is such that this product is considered suitable for continental-scale climate studies. The data are updated monthly in netCDF file format with variables including SCE and National Meteorological Center (NMC) grid (88x88 cell) coordinates.</description>
<size>26024322</size>
</item><item>
<title>noaa-ncei-viirs-global-area-coverage</title>
<category>Dataset</category>
<infohash>1bff15cad243cadd9200ae2a272fdb52f1ee201a</infohash>
<guid>https://academictorrents.com/details/1bff15cad243cadd9200ae2a272fdb52f1ee201a</guid>
<link>https://academictorrents.com/details/1bff15cad243cadd9200ae2a272fdb52f1ee201a</link>
<description>The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument represents a significant change from historical weather satellite observations. For example, there are several differences in observation style, channels, and structure from the Advanced Very High Resolution Radiometer (AVHRR), which has flown on low-Earth-orbit satellites since the 1970s. Several longstanding climate data records were developed expressly using the AVHRR instrument. The VIIRS Global Area Coverage (VGAC) dataset is an attempt to more closely bridge the old technology with the new. Each VGAC data file provides one orbit of VIIRS observations (22 channels: 14 Reflective Solar Bands, 7 Thermal Emissive Bands, and 1 Day/Night Band) that have been degraded in resolution to approximate the AVHRR GAC resolution. Each channel is provided on the same swath, which simulates a 3.9 km resolution scan. The moderate-resolution VIIRS channels each have the mean value of the radiance in that 3.9 km region. Some of the higher-resolution imaging channels (I channels) contain more statistical information, including maximum and minimum radiances, which benefits algorithms like cloud and surface retrievals.</description>
<size>204594401752</size>
</item><item>
<title>ChemY11annotated.pdf</title>
<category>Paper</category>
<infohash>5a19ccf846b424d516cd00099d42ad1f636af82f</infohash>
<guid>https://academictorrents.com/details/5a19ccf846b424d516cd00099d42ad1f636af82f</guid>
<link>https://academictorrents.com/details/5a19ccf846b424d516cd00099d42ad1f636af82f</link>
<description>Textbook for Western Australia Year 11 Chemistry with Annotations</description>
<size>404187973</size>
</item><item>
<title>PhysY11annotated.pdf</title>
<category>Paper</category>
<infohash>137d32ae4896372f68bfb36b5d0d4f5cf1ce89ec</infohash>
<guid>https://academictorrents.com/details/137d32ae4896372f68bfb36b5d0d4f5cf1ce89ec</guid>
<link>https://academictorrents.com/details/137d32ae4896372f68bfb36b5d0d4f5cf1ce89ec</link>
<description>2nd Edition Textbook for Western Australia Year 11 Physics with Annotations</description>
<size>336957523</size>
</item><item>
<title>epa-ord-enviroatlas-data</title>
<category>Dataset</category>
<infohash>464095890fe5795496b9e20ba0d6b93f2c8f74bf</infohash>
<guid>https://academictorrents.com/details/464095890fe5795496b9e20ba0d6b93f2c8f74bf</guid>
<link>https://academictorrents.com/details/464095890fe5795496b9e20ba0d6b93f2c8f74bf</link>
<description>Full download of EPA ORD EnviroAtlas data. National Data Most maps at the national extent provide wall-to-wall data coverage for the contiguous U.S. as well as some data for Alaska, Hawaii, Puerto Rico, the U.S. Virgin Islands, and the U.S. Pacific Island territories. There are over 400 data layers at this extent. Many of these data layers are summarized by 12-digit hydrologic unit codes (12-digit HUCs), or sub-watershed basins, and provide approximately 90,000 similarly sized spatial units. Many of these data layers are derived from data with a resolution of 30 m. Ecosystem Markets data layers are available for the nation, showing point and polygon data for ecosystem market initiatives and enabling conditions operating at a variety of scales, from national to local. Populated Places at High Resolution Higher resolution data in EnviroAtlas draws from meter-scale urban land cover data, census data, and models. There are approximately 100 data layers per area. These fine-scale data are consistent for each available populated place, and they are mostly summarized by census block groups. Many of the boundaries for community areas are based on selected block groups within the 2010 US Census Urban Area boundary. EnviroAtlas currently includes fine-scale data for more than 1400 cities and towns centered on 30 U.S. urbanized community areas. EnviroAtlas Uses an Indicator and Index Approach to Ecosystem Services We select and develop indicators for their ability to provide information about a particular ecosystem service. Many of the indicators are not a direct measurement of a specific ecosystem service but rather provide one piece of a complex puzzle of information. Indicators have been selected for their ability to describe provision, benefits and beneficiaries, and drivers of change of ecosystem services. 
Some of the community data gets closer to ecosystem services measurements than does the national data because models have been developed that can be applied at the community scale. Taken collectively, a group of indicators can get closer to accurately quantifying an ecosystem service. EnviroAtlas is adding tools allowing the user to take a group of indicators and combine them into an index. The EnviroAtlas team is continuing to develop more robust indicators. EnviroAtlas Relies on Foundational Data The development of indicators relies on the availability of nationally and locally available data sets which provide inputs to models and calculations. EnviroAtlas supports the development of some data sets that provide important inputs. Land cover, for example, is a critical data set that is necessary for the computation of many of the EnviroAtlas data layers. EnviroAtlas supports the development of the National Land Cover Dataset (NLCD), a 30 meter resolution product. We also develop the high resolution land cover, a 1 meter resolution product, which is used for the selected communities. For investigating changes over time, it is important to have land cover data available for multiple time periods. NLCD is produced every 5 years. Other data sets such as stream hydrography, soils, demographics, topography, and economic data, in combination with land cover, are used to produce our indicators.</description>
<size>176227963274</size>
</item><item>
<title>centers-for-medicare-and-medicaid-services-open-data</title>
<category>Dataset</category>
<infohash>5cec78484a95de01d27e09cb4a46c3ed8c01e920</infohash>
<guid>https://academictorrents.com/details/5cec78484a95de01d27e09cb4a46c3ed8c01e920</guid>
<link>https://academictorrents.com/details/5cec78484a95de01d27e09cb4a46c3ed8c01e920</link>
<description>Corrected and updated version. Contains full records/datasets available through the CMS Open Data API, including: CMS Innovation Center Programs COVID-19 Resources Medicare Current Beneficiary Survey (MCBS) Medicare Shared Savings Program Medicare Value-Based Payment Modifier Program Provider Characteristics Provider Compliance Provider Summary By Type of Service Quality of Care Summary Statistics on Beneficiary Enrollment Summary Statistics on Provider Enrollment Summary Statistics on Use and Payments About: This site gives you direct access to public data released by the Centers for Medicare &amp; Medicaid Services (CMS). Our goal is to make our data readily available in open, accessible, and machine-readable formats. For most available data, you can: Download data in a variety of formats. View and analyze data using interactive tools. Access data through an Application Programming Interface, or API. An API lets developers connect other applications to data in real time. The Centers for Medicare and Medicaid Services (CMS) provides health coverage to more than 100 million people through Medicare, Medicaid, the Children's Health Insurance Program, and the Health Insurance Marketplace. CMS seeks to strengthen and modernize the Nation's health care system, to provide access to high quality care and improved health at lower costs.</description>
<size>282840999173</size>
</item><item>
<title>centers-for-medicare-and-medicaid-services-data-medicaid-gov</title>
<category>Dataset</category>
<infohash>3302596bb7dbc1daf74d23bbbb1aade8e4a32bb0</infohash>
<guid>https://academictorrents.com/details/3302596bb7dbc1daf74d23bbbb1aade8e4a32bb0</guid>
<link>https://academictorrents.com/details/3302596bb7dbc1daf74d23bbbb1aade8e4a32bb0</link>
<description>Contains full API download of all available datasets on data.medicaid.gov. Program overview Data.Medicaid.gov is a public platform offering open access to a diverse range of datasets related to Medicaid and the Children's Health Insurance Program (CHIP). It is tailored to support policymakers, researchers, and the general public by providing critical data for research, reporting, and analysis. The platform covers various topics, including state Medicaid and CHIP programs, enrollment statistics, spending trends, and quality metrics. With data presented in multiple formats, it promotes transparency, allowing users to track program performance and make informed decisions based on reliable insights.</description>
<size>1250570335</size>
</item><item>
<title>centers-for-medicare-and-medicaid-services-open-payments-data</title>
<category>Dataset</category>
<infohash>2cff19a1a2dcccbabd3a88304d7912c28799a5fd</infohash>
<guid>https://academictorrents.com/details/2cff19a1a2dcccbabd3a88304d7912c28799a5fd</guid>
<link>https://academictorrents.com/details/2cff19a1a2dcccbabd3a88304d7912c28799a5fd</link>
<description>Contains full records from the CMS Open Payments Data API. About openpaymentsdata.cms.gov: The mission of the program is to provide the public with a more transparent health care system. Open Payments collects and publishes information about financial relationships between drug and medical device companies (referred to as "reporting entities") and certain health care providers (referred to as "covered recipients"). These relationships may involve payments to providers for things including but not limited to research, meals, travel, gifts or speaking fees. All information available on the Open Payments database is open to personal interpretation and if there are questions about the data, patients and their advocates should speak directly to the health care provider for a better understanding.</description>
<size>5802424226</size>
</item><item>
<title>"DiagSet-A patches" from: DiagSet: a dataset for prostate cancer histopathological image classification</title>
<category>Dataset</category>
<infohash>cfb14ded5acbfca08ccf4e35e2d5c65085edf5e1</infohash>
<guid>https://academictorrents.com/details/cfb14ded5acbfca08ccf4e35e2d5c65085edf5e1</guid>
<link>https://academictorrents.com/details/cfb14ded5acbfca08ccf4e35e2d5c65085edf5e1</link>
<description>Cancer diseases constitute one of the most significant societal challenges. In this paper, we introduce a novel histopathological dataset for prostate cancer detection. The proposed dataset, consisting of over 2.6 million tissue patches extracted from 430 fully annotated scans, 4675 scans with assigned binary diagnoses, and 46 scans with diagnoses independently provided by a group of histopathologists, can be found at https://github.com/michalkoziarski/DiagSet. Furthermore, we propose a machine learning framework for detection of cancerous tissue regions and prediction of scan-level diagnosis, utilizing thresholding to abstain from the decision in uncertain cases. The proposed approach, composed of ensembles of deep neural networks operating on the histopathological scans at different scales, achieves 94.6% accuracy in patch-level recognition, and its scan-level diagnoses show high statistical agreement with those of 9 human histopathologists. Link to paper: https://www.nature.com/articles/s41598-024-52183-4</description>
<size>378380418898</size>
</item><item>
<title>Reddit comments/submissions 2025-04</title>
<category>Dataset</category>
<infohash>552f34df5b830d18f98b69541e7e84f2658346b9</infohash>
<guid>https://academictorrents.com/details/552f34df5b830d18f98b69541e7e84f2658346b9</guid>
<link>https://academictorrents.com/details/552f34df5b830d18f98b69541e7e84f2658346b9</link>
<description>Reddit comments and submissions from 2025-04 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>53949816137</size>
</item><item>
<title>usfw-open-data-gis-full-crawl</title>
<category>Dataset</category>
<infohash>b9dc0aae229f4f5a215c8ea542bf1a1bb0892847</infohash>
<guid>https://academictorrents.com/details/b9dc0aae229f4f5a215c8ea542bf1a1bb0892847</guid>
<link>https://academictorrents.com/details/b9dc0aae229f4f5a215c8ea542bf1a1bb0892847</link>
<description>Contains results of a full FeatureServer crawl of the US Fish and Wildlife Service Open Data site (https://gis-fws.opendata.arcgis.com/search?collection=dataset). Each dataset contains an item.json with basic metadata, though many do not expose direct GIS file downloads through their ArcGIS data sources. For those that do, a GeoJSON file has been downloaded for each layer, as that is the only format available through USFW FeatureServers. Datasets are sorted by category where available; otherwise they are placed in the Misc folder and sorted by tag. Includes US Fish and Wildlife Service Open Data.csv, a record of all datasets present.</description>
<size>141466279000</size>
</item><item>
<title>noaa-ncei-sar-winds</title>
<category>Dataset</category>
<infohash>229333d7ea1437a33e307b92bc6f37eb91c6a56c</infohash>
<guid>https://academictorrents.com/details/229333d7ea1437a33e307b92bc6f37eb91c6a56c</guid>
<link>https://academictorrents.com/details/229333d7ea1437a33e307b92bc6f37eb91c6a56c</link>
<description>The NOAA Center for Satellite Applications and Research (STAR) and Office of Satellite Research (OSPO) produce Level-2, high-resolution sea surface wind products based on data captured by Synthetic Aperture Radar (SAR) on board RADARSAT-2, RADARSAT Constellation Mission (RCM), Sentinel-1A and Sentinel-1B satellites. These products have been archived at NCEI to support delayed mode applications including coastal climatologies, synoptic weather study, and wind measurement validation, while also supplying near real-time measurements via NOAA CoastWatch and OSPO. Contains three separate datasets: NOAA high resolution sea surface winds data from Synthetic Aperture Radar (SAR) on the RADARSAT Constellation Mission (RCM) satellites This dataset consists of high-resolution sea surface winds data produced from Synthetic Aperture Radar (SAR) on board the RADARSAT Constellation Mission (RCM) satellites. The basic archive file is a netCDF-4 file containing SAR wind, land mask, and time and earth location information. Images of the SAR wind data in GeoTIFF format are also included. The product covers the geographic extent of the SAR image frame from which it was derived. These SAR-derived high resolution wind products are calculated from high resolution SAR images of normalized radar cross section (NRCS) of the Earth's surface. Backscattered microwave radar returns from the ocean surface are strongly dependent on wind speed and direction. When no wind is present, the surface of the water is smooth, almost glass-like. Radar energy will largely be reflected away and the radar cross section will be low. As the wind begins to blow, the surface roughens and surface waves begin to develop. As the wind continues to blow more strongly, the amplitude of the wave increases, thus roughening the surface more. As the surface roughness increases, more energy is backscattered and NRCS increases. 
Moreover, careful examination of the wind-generated waves reveals that these surface wave crests are generally aligned perpendicular to the prevailing wind direction, suggesting a dependence of backscatter on the relative direction between the incident radar energy and the wind direction. NOAA high resolution sea surface winds data from Synthetic Aperture Radar (SAR) on the RADARSAT-2 satellite This dataset consists of high resolution sea surface winds data produced from Synthetic Aperture Radar (SAR) on board the RADARSAT-2 satellite. The basic archive file is a netCDF-4 file containing SAR wind, a land mask, and time and earth location information. Maps of the SAR wind data in GeoTIFF format are also included. The product covers the geographic extent of the SAR image frame from which it was derived. These wind products are calculated from NRCS imagery in the same way as the RCM winds described above. 
NOAA high resolution sea surface winds data from Synthetic Aperture Radar (SAR) on the Sentinel-1 satellites This dataset consists of high resolution sea surface winds data produced from Synthetic Aperture Radar (SAR) on board Sentinel-1A and Sentinel-1B satellites. The basic archive file is a netCDF-4 file containing SAR wind, land mask, and time and earth location information. Also included are maps of the SAR winds in GeoTIFF format. The product covers the geographic extent of the SAR image frame from which it was derived. These wind products are likewise calculated from NRCS imagery in the same way as the RCM winds described above.</description>
<size>1564862824502</size>
</item><item>
<title>DeepSeek-Prover-V2-671B</title>
<category>Dataset</category>
<infohash>efb9521e52d57bd24753d8d742fa3fd900cf568a</infohash>
<guid>https://academictorrents.com/details/efb9521e52d57bd24753d8d742fa3fd900cf568a</guid>
<link>https://academictorrents.com/details/efb9521e52d57bd24753d8d742fa3fd900cf568a</link>
<description>DeepSeek-Prover-V2-671B shares the same architecture as DeepSeek-V3. For detailed information and supported features, please refer to the DeepSeek-V3 documentation on Hugging Face.</description>
<size>1404470963028</size>
</item><item>
<title>United States Forest Service Treesearch scrape with metadata</title>
<category>Dataset</category>
<infohash>925ae379627bc1d8b60d8cb3129a6a2ec816673d</infohash>
<guid>https://academictorrents.com/details/925ae379627bc1d8b60d8cb3129a6a2ec816673d</guid>
<link>https://academictorrents.com/details/925ae379627bc1d8b60d8cb3129a6a2ec816673d</link>
<description>This is a scrape of Treesearch from the United States Forest Service website (https://research.fs.usda.gov/treesearch) taken on 04/17/25. It consists of a JSON file "usfs_treesearchfinal.json" that contains all the metadata you would find in each Treesearch article (year, source, authors, BibTeX citation, etc.). You can open it with your browser (note: it will be slow to open initially). The articles folder contains the full-text PDFs of the articles, where available. Each metadata item contains a pdf_name field, which can be used to find the paper's full text in the articles folder. Note that a few articles had broken links and some did not have full-text downloads available. This scrape collected 62,937 entries from Treesearch, of which 62,386 include full-text articles.</description>
<size>125408266360</size>
</item><item>
<title>noaa-ncei-sea-surface-temperature-optimum-interpolation</title>
<category>Dataset</category>
<infohash>5476dbfcacb18d3e5ba80c845262709ca9c06f0d</infohash>
<guid>https://academictorrents.com/details/5476dbfcacb18d3e5ba80c845262709ca9c06f0d</guid>
<link>https://academictorrents.com/details/5476dbfcacb18d3e5ba80c845262709ca9c06f0d</link>
<description>The NOAA 1/4° Daily Optimum Interpolation Sea Surface Temperature (OISST) is a long-term Climate Data Record that incorporates observations from different platforms (satellites, ships, buoys and Argo floats) into a regular global grid. The dataset is interpolated to fill gaps on the grid and create a spatially complete map of sea surface temperature. Satellite and ship observations are referenced to buoys to compensate for platform differences and sensor biases. OISST v2.1 replaced v2 on April 1, 2020. V2 stopped production on April 26, 2020 after its input datasets were discontinued. Data are currently available from September 1, 1981 to present, and updated every day. V2.1 has significant quality improvements for data from January 1, 2016 onward.</description>
<size>166169852476</size>
</item><item>
<title>Pluralsight | Advanced Web Application Penetration Testing with Burp Suite</title>
<category>Course</category>
<infohash>5fbb2badd96b59a3929b98c44710e7551dfaab38</infohash>
<guid>https://academictorrents.com/details/5fbb2badd96b59a3929b98c44710e7551dfaab38</guid>
<link>https://academictorrents.com/details/5fbb2badd96b59a3929b98c44710e7551dfaab38</link>
<description>Pluralsight - Advanced Web Application Penetration Testing with Burp Suite Course details Burp Suite can help improve your penetration testing. This is an advanced course designed to expand your knowledge of the Burp Suite product to utilize many of the lesser known features offered in the tool. What you'll learn Did you know Burp Suite makes automation, data exfiltration, and customization techniques possible to help make you an even better pentester? This advanced course, Advanced Web Application Penetration Testing with Burp Suite, is designed to expand your knowledge of the Burp Suite product to utilize many of the lesser known features offered in the tool. You will learn how to exploit security vulnerabilities in your target, write your own Burp extension, perform automation with Burp, and more. By the end of this course, you'll know how to perform all of these techniques at a comfortable and efficient level to better perform your pentesting tasks. If you are currently a mid-to-senior level developer or pentester and wish to learn about attacking web applications using more features of Burp Suite, then this course is designed for you. General Details: Duration: 1h 50m Updated: Oct 11, 2024 Language: English Source: https://www.pluralsight.com/courses/advanced-web-application-penetration-testing-burp-suite MP4 | Video: AVC, 1280x720p | Audio: AAC, 44.100 KHz, 2 Ch</description>
<size>218050673</size>
</item><item>
<title>galsbi__template_BlantonRoweis07</title>
<category>Paper</category>
<infohash>31d797687b5b121ad995269b616df45be7ea4b66</infohash>
<guid>https://academictorrents.com/details/31d797687b5b121ad995269b616df45be7ea4b66</guid>
<link>https://academictorrents.com/details/31d797687b5b121ad995269b616df45be7ea4b66</link>
<description/>
<size>380663</size>
</item><item>
<title>galsbi__Moser+24_abc_posterior</title>
<category>Paper</category>
<infohash>5e3782909e1e2659aa5a3de4200b46280cccf630</infohash>
<guid>https://academictorrents.com/details/5e3782909e1e2659aa5a3de4200b46280cccf630</guid>
<link>https://academictorrents.com/details/5e3782909e1e2659aa5a3de4200b46280cccf630</link>
<description/>
<size>483520</size>
</item><item>
<title>galsbi__lambda_sfd_ebv</title>
<category>Paper</category>
<infohash>4489214591f73c3f239ab44c9e415d8ba4550c15</infohash>
<guid>https://academictorrents.com/details/4489214591f73c3f239ab44c9e415d8ba4550c15</guid>
<link>https://academictorrents.com/details/4489214591f73c3f239ab44c9e415d8ba4550c15</link>
<description/>
<size>12537501</size>
</item><item>
<title>galsbi__HSC_tables</title>
<category>Paper</category>
<infohash>ee636afc5abc73a9547da08ba7ba9402f23df15b</infohash>
<guid>https://academictorrents.com/details/ee636afc5abc73a9547da08ba7ba9402f23df15b</guid>
<link>https://academictorrents.com/details/ee636afc5abc73a9547da08ba7ba9402f23df15b</link>
<description/>
<size>628095717</size>
</item><item>
<title>galsbi__Fischbacher+24_abc_posterior</title>
<category>Paper</category>
<infohash>be7961f788f77e68ce1b0ab42e4669098d1cf52b</infohash>
<guid>https://academictorrents.com/details/be7961f788f77e68ce1b0ab42e4669098d1cf52b</guid>
<link>https://academictorrents.com/details/be7961f788f77e68ce1b0ab42e4669098d1cf52b</link>
<description/>
<size>491493</size>
</item><item>
<title>galsbi__emulator_moser24</title>
<category>Paper</category>
<infohash>12db618239e23e5ea5a32153315be7b01dad6e52</infohash>
<guid>https://academictorrents.com/details/12db618239e23e5ea5a32153315be7b01dad6e52</guid>
<link>https://academictorrents.com/details/12db618239e23e5ea5a32153315be7b01dad6e52</link>
<description/>
<size>7727858004</size>
</item><item>
<title>galsbi__emulator_fischbacher24</title>
<category>Dataset</category>
<infohash>660afcd2a71982899c5ecea287cefeb09bdfcc84</infohash>
<guid>https://academictorrents.com/details/660afcd2a71982899c5ecea287cefeb09bdfcc84</guid>
<link>https://academictorrents.com/details/660afcd2a71982899c5ecea287cefeb09bdfcc84</link>
<description/>
<size>55822318</size>
</item><item>
<title>noaa-ncei-land-surface-reflectance</title>
<category>Dataset</category>
<infohash>5bb7bc141b28e13b6622a0edd50adaeeee333ed7</infohash>
<guid>https://academictorrents.com/details/5bb7bc141b28e13b6622a0edd50adaeeee333ed7</guid>
<link>https://academictorrents.com/details/5bb7bc141b28e13b6622a0edd50adaeeee333ed7</link>
<description>The Surface Reflectance - Polar Orbiter Climate Data Record (CDR) contains gridded daily surface reflectance and brightness temperatures derived from both the Advanced Very High Resolution Radiometer (AVHRR) and the Visible Infrared Imaging Radiometer Suite (VIIRS) sensors onboard NOAA polar orbiting satellites. Surface reflectance from AVHRR channels 1 and 2 (at 630 and 865 nm) and VIIRS channels I1, I2, and I3 (at 640, 865, and 1610 nm) are central to this NOAA CDR. The AVHRR dataset spans from 1981 to 2013 and the VIIRS dataset spans from 2014 to 10 days before the present. Output is generated daily on a 0.05° by 0.05° grid.</description>
<size>2140558640385</size>
</item><item>
<title>noaa-ncei-iasi-sim-hirs-rad</title>
<category>Dataset</category>
<infohash>bab5734902386005a2ff715b7a682428e7e97e92</infohash>
<guid>https://academictorrents.com/details/bab5734902386005a2ff715b7a682428e7e97e92</guid>
<link>https://academictorrents.com/details/bab5734902386005a2ff715b7a682428e7e97e92</link>
<description>Infrared Atmospheric Sounding Interferometer (IASI) Simulated High-resolution Infrared Radiation Sounder (HIRS) Radiances, Version 2b The dataset includes radiances for HIRS channels 1-12 that are simulated from IASI swath measurements on Metop-A, B, and C, at the spatial resolution of the original IASI pixel resolution. It is an intermediate product of the NOAA Climate Data Record (CDR) of IR Sounder Upper Tropospheric Humidity BT, Version 4.0. The dataset has a global coverage. The development of the dataset is an effort to extend the legacy HIRS products using the new generation IASI instruments.</description>
<size>31313693379</size>
</item><item>
<title>Coursera | Programming For Designers Specialization</title>
<category>Course</category>
<infohash>f58a74aad31ae9ac4c14f3e30414d93fe8c1ce8d</infohash>
<guid>https://academictorrents.com/details/f58a74aad31ae9ac4c14f3e30414d93fe8c1ce8d</guid>
<link>https://academictorrents.com/details/f58a74aad31ae9ac4c14f3e30414d93fe8c1ce8d</link>
<description>Coursera - Programming for Designers Specialization Course details Develop a foundation in Computational Design. Explore Creative Coding with Python What you'll learn - Learn the fundamentals of Python programming, including essential coding techniques - Engage in computational design thinking to approach design problems with a mindset that leverages computational strategy and problem-solving - Understand how to develop custom algorithms that can generate a range of design solutions against complex requirements, constraints, and objectives - Demonstrate the application of computational methods in design-related disciplines using a variety of computational tools Specialization - 3 course series In Programming for Designers, you will explore Python programming within a creative context, equipping you with essential computational design skills. Beginning with fundamental programming principles, you will move on to more intricate data structures, leading to the development of practical creative coding projects. Learn how to use the Processing platform, a program that allows designers to create visual, interactive media to meet their project needs. Develop the skills to move from simple to intricate designs, ranging from illustrative shapes and images to animations. Cover procedural best practices for design applications and intelligence navigation, and build a rich understanding of how advanced data structures can be used to create digital environments. This course series is tailored for individuals within architecture, graphic design, industrial design, game design and the visual arts interested in integrating programming with graphic creativity. 
As each course in the series is structured to build on previous course knowledge, Programming for Designers allows you to practice your skills within Python, allowing you to bring your design concepts to life with precision and efficiency. Applied Learning Project Participants will create graphic applications in Python through the Processing environment. Access to a comprehensive series of design examples tailored for the course is provided, along with instructions to build each from the ground up. This approach covers essential principles and leads to the development of personalized creative applications. Skills you’ll gain - Object Oriented Programming Language - Programming graphics - Design - Computational Design - Data Structures - Python Programming - Object Oriented Programming (OOP) - Processing (Programming environment) - Computational thinking Author: Jose Sanchez, Offered by University of Michigan General Details: Duration: 31h 18m 21s Updated: 04/2025 Language: English Subtitle: .SRT Included Source: https://www.coursera.org/specializations/programming-for-designers MP4 | Video: AVC, 1280x720p | Audio: AAC, 44.100 KHz, 2 Ch | Course material included</description>
<size>6125120437</size>
</item><item>
<title>Code with Mosh | Spring Boot Mastering REST API Development</title>
<category>Course</category>
<infohash>21d8bc06c7ec53194e7b56867464dc7f8ec2788b</infohash>
<guid>https://academictorrents.com/details/21d8bc06c7ec53194e7b56867464dc7f8ec2788b</guid>
<link>https://academictorrents.com/details/21d8bc06c7ec53194e7b56867464dc7f8ec2788b</link>
<description>Code with Mosh - Spring Boot Mastering REST API Development Overview A Course You'll Actually Finish Spring Boot: Mastering REST API Development Build REST APIs, secure them with Spring Security, process payments with Stripe, and deploy like a pro About the Course Spring Boot is one of the most in-demand frameworks for modern backend development, and this course takes you beyond the basics to help you build, secure, and deploy real-world applications with confidence. This is Part 2 of the ultimate Spring Boot series. In Part 1, you mastered the fundamentals. In this part, we’ll put that knowledge into practice by building and deploying the backend for a real e-commerce application. You’ll learn how to build clean and secure RESTful APIs, implement authentication and role-based access control, integrate with Stripe for payment processing, and deploy your application to the cloud. This course goes beyond CRUD and boilerplate setups. You’ll learn how real production systems are built, with a strong focus on clean code, modular architecture, and real-world best practices. Whether you’re preparing for a job, planning to build your own app, or just want to take your backend skills to the next level, this course is designed to get you there. 
What you'll learn - Build RESTful APIs using Spring Boot - Validate incoming requests and handle errors with custom logic - Secure your APIs with Spring Security and JWT authentication - Implement role-based access control for protected resources - Structure your application using clean, maintainable architecture - Build a shopping cart system with full checkout functionality - Integrate Stripe Checkout to process real payments - Configure environment-specific settings using Spring profiles - Deploy your app and database to the cloud - Apply industry best practices for writing clean, testable, and production-ready code What You'll Build You’ll build the backend for a full-featured e-commerce application, complete with authentication, role-based access, a shopping cart, checkout flow, payment processing with Stripe, and cloud deployment. This isn’t a basic CRUD app; it’s a real-world project that mirrors what professional backend developers build in production. Who Is This For? - Developers who've learned the basics of Spring Boot and want to go deeper - Those who are tired of toy examples and want to build real-world projects - Aspiring backend developers preparing for a Spring Boot job or freelance work Prerequisites This is Part 2 of my Spring Boot: Mastering the Fundamentals course. Ideally, you should have completed Part 1 or already be familiar with key Spring Boot concepts like beans and dependency injection, entities, repositories, and Flyway migrations. New to Spring Boot? Start with Part 1 and come back when you're ready to build real-world APIs. Author Hi! I'm Mosh Hamedani. I’ve spent 20+ years in software engineering, and my goal isn’t just to teach you to code: it’s to help you think like a professional software engineer, master problem-solving, and build skills you’ll use for life. 
General Details: Duration: 8h 58m 41s Updated: 04/2025 Language: English Source: https://codewithmosh.com/p/spring-boot-building-apis MP4 | Video: AVC, 1920x1080p | Audio: AAC, 48 kHz, 2 Ch</description>
<size>1585840976</size>
</item><item>
<title>DIODE: A Dense Indoor and Outdoor DEpth Dataset</title>
<category>Dataset</category>
<infohash>464be310042eb618f3720ce27d1276ec9641ebff</infohash>
<guid>https://academictorrents.com/details/464be310042eb618f3720ce27d1276ec9641ebff</guid>
<link>https://academictorrents.com/details/464be310042eb618f3720ce27d1276ec9641ebff</link>
<description/>
<size>89529609218</size>
</item><item>
<title>enwiki-20250501-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>cd872797612d95384de3a0ab7e6a1f156bf91495</infohash>
<guid>https://academictorrents.com/details/cd872797612d95384de3a0ab7e6a1f156bf91495</guid>
<link>https://academictorrents.com/details/cd872797612d95384de3a0ab7e6a1f156bf91495</link>
<description>English Wikipedia Multistream 2025-05-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>24980844834</size>
</item><item>
<title>BioRxiv - CC &amp; Public Domain Catalog - 2023</title>
<category>Dataset</category>
<infohash>4a5dc447e3a3e0b338abaa26517689c5e804c13f</infohash>
<guid>https://academictorrents.com/details/4a5dc447e3a3e0b338abaa26517689c5e804c13f</guid>
<link>https://academictorrents.com/details/4a5dc447e3a3e0b338abaa26517689c5e804c13f</link>
<description>Part of a set of torrents - Index: https://sciop.net/datasets/biorxiv - Back Catalogue: (in progress) - 2018: https://academictorrents.com/details/1509e322f49fd946ab441aa7b092f53879971d87 - 2019: https://academictorrents.com/details/1956fb55a853aaf0558a20f75adfcb65154b7c6a - 2020: https://academictorrents.com/details/b81c1be4f0b7ec622ec9cbde2551aaf1547dc33c - 2021: https://academictorrents.com/details/c8ab36be273872466f6a391af4e42e6541c8e65a - 2022: https://academictorrents.com/details/d33e08e51ece62509bc72042514a19a66d225bd6 - 2023: (this torrent) - 2024: (in progress) - 2025 through 25-03-10: https://academictorrents.com/details/d70fda6123588f88478e36204b8be9a751f415da --- Full archive of [MECA](https://www.niso.org/standards-committees/meca)-formatted dumps from BioRxiv's [full text S3 endpoint](https://www.biorxiv.org/tdm). ## Format These torrents are hybrid bittorrent v1/v2 torrents - this facilitates mutation, indexing, and download of individual files. You should use a bittorrent v2 capable client to download (e.g. qbittorrent with libtorrent 2, listed as "qt6 lt20" on the download page). Academictorrents currently does not understand v2 torrent files - **the total size of the torrent listed on academictorrents is thus incorrect.** Hybrid torrents contain [BEP 47](https://www.bittorrent.org/beps/bep_0047.html) padding files to align the v1 pieces so each covers at most one file. A torrent client that understands v2 will *not download these files* since they are just empty placeholders. These torrents also have had to make some modifications to the original source structure in order to fit within the 10MB torrent limit on academictorrents ([see issue](https://github.com/academictorrents/academictorrents-docs/issues/46)). The primary contributor to torrent size is the duplication of file names in hybrid torrents, so the original meca filenames have been replaced with the DOI suffix for the item in the meca. 
If you seed this torrent, consider also snatching and seeding the v2-only torrent that will be uploaded to sciop shortly; it should be much more efficient. Individual item metadata is contained within the JATS XML of the meca (a meca is just a zip file, so it can be read without decompressing the whole archive), but some summary metadata is included for indexing purposes: - `doi_map.json`: maps the item DOI to the location within the torrent - `license_map.json`: maps the license to the meca - `license_counts.json`: summary statistics for each license kind - `errors.json`: any errors that were encountered while creating the torrent. ## Legality BioRxiv's bulk access page (currently) reads: &gt; The TDM repository is not intended as a source for further redistribution of articles posted on bioRxiv, or their derivatives, nor does it grant others permission to re-host content posted on bioRxiv. For most articles submitted to bioRxiv, authors retain copyright and reuse rights. If you build indexing services or tools based on the full text of articles, you must therefore link back to the text hosted at bioRxiv rather than re-host content. For reuse/redistribution of individual articles or their derivatives, please consult the licensing terms applied by the authors, which are provided in the metadata. In most cases, this will require you to contact the copyright holder in advance to obtain permission. It is true that *authors determine the copyright status of their work*, but it is not necessarily true that *"in most cases, this will require you to contact the copyright holder in advance to obtain permission."* The majority of work published by BioRxiv is licensed under a [Creative Commons](https://creativecommons.org/) license variant that expressly permits redistribution. We respect the authors' intent by redistributing all the CC and public domain works free of charge, with attribution, here.  
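The summary metadata files described above make it possible to pull a single item out of the archive without unpacking everything. A minimal sketch in Python, assuming the torrent contents sit in a local directory and that `doi_map.json` maps each DOI to a relative meca path (the exact layout may differ):

```python
import json
import zipfile
from pathlib import Path

def jats_members(root: Path, doi: str) -> list[str]:
    """Look up a DOI in doi_map.json and list the XML members of its meca.

    A meca is just a zip archive, so individual members can be listed
    and read without decompressing the whole file.
    """
    doi_map = json.loads((root / "doi_map.json").read_text())
    with zipfile.ZipFile(root / doi_map[doi]) as meca:
        return [name for name in meca.namelist() if name.endswith(".xml")]
```

For example, `jats_members(Path("biorxiv-2023"), some_doi)` (hypothetical directory name) would return the JATS XML paths inside that item's meca, which can then be read individually with `ZipFile.read`.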
All works licensed under restrictive licenses that prohibit redistribution have been removed from the dataset and are not present in the torrent. This work is listed on academictorrents as CC BY-NC-ND 4.0, the most restrictive of the licenses found in the dataset, but the license for each work is provided in a `license_map.json` within the torrent. See https://sciop.net/datasets/biorxiv for further details about the creation of these torrents.</description>
<size>1052115992576</size>
</item><item>
<title>BioRxiv - CC &amp; Public Domain Catalog - 2022</title>
<category>Dataset</category>
<infohash>d33e08e51ece62509bc72042514a19a66d225bd6</infohash>
<guid>https://academictorrents.com/details/d33e08e51ece62509bc72042514a19a66d225bd6</guid>
<link>https://academictorrents.com/details/d33e08e51ece62509bc72042514a19a66d225bd6</link>
<description>Part of a set of torrents - Index: https://sciop.net/datasets/biorxiv - Back Catalogue: (in progress) - 2018: https://academictorrents.com/details/1509e322f49fd946ab441aa7b092f53879971d87 - 2019: https://academictorrents.com/details/1956fb55a853aaf0558a20f75adfcb65154b7c6a - 2020: https://academictorrents.com/details/b81c1be4f0b7ec622ec9cbde2551aaf1547dc33c - 2021: https://academictorrents.com/details/c8ab36be273872466f6a391af4e42e6541c8e65a - 2022: (this torrent) - 2023: https://academictorrents.com/details/4a5dc447e3a3e0b338abaa26517689c5e804c13f - 2024: (in progress) - 2025 through 25-03-10: https://academictorrents.com/details/d70fda6123588f88478e36204b8be9a751f415da --- Full archive of [MECA](https://www.niso.org/standards-committees/meca)-formatted dumps from BioRxiv's [full text S3 endpoint](https://www.biorxiv.org/tdm). ## Format These torrents are hybrid bittorrent v1/v2 torrents - this facilitates mutation, indexing, and download of individual files. You should use a bittorrent v2 capable client to download (e.g. qbittorrent with libtorrent 2, listed as "qt6 lt20" on the download page). Academictorrents currently does not understand v2 torrent files - **the total size of the torrent listed on academictorrents is thus incorrect.** Hybrid torrents contain [BEP 47](https://www.bittorrent.org/beps/bep_0047.html) padding files to align the v1 pieces so each covers at most one file. A torrent client that understands v2 will *not download these files* since they are just empty placeholders. These torrents also have had to make some modifications to the original source structure in order to fit within the 10MB torrent limit on academictorrents ([see issue](https://github.com/academictorrents/academictorrents-docs/issues/46)). The primary contributor to torrent size is the duplication of file names in hybrid torrents, so the original meca filenames have been replaced with the DOI suffix for the item in the meca. 
If you seed this torrent, consider also snatching and seeding the v2-only torrent that will be uploaded to sciop shortly; it should be much more efficient. Individual item metadata is contained within the JATS XML of the meca (a meca is just a zip file, so it can be read without decompressing the whole archive), but some summary metadata is included for indexing purposes: - `doi_map.json`: maps the item DOI to the location within the torrent - `license_map.json`: maps the license to the meca - `license_counts.json`: summary statistics for each license kind - `errors.json`: any errors that were encountered while creating the torrent. ## Legality BioRxiv's bulk access page (currently) reads: &gt; The TDM repository is not intended as a source for further redistribution of articles posted on bioRxiv, or their derivatives, nor does it grant others permission to re-host content posted on bioRxiv. For most articles submitted to bioRxiv, authors retain copyright and reuse rights. If you build indexing services or tools based on the full text of articles, you must therefore link back to the text hosted at bioRxiv rather than re-host content. For reuse/redistribution of individual articles or their derivatives, please consult the licensing terms applied by the authors, which are provided in the metadata. In most cases, this will require you to contact the copyright holder in advance to obtain permission. It is true that *authors determine the copyright status of their work*, but it is not necessarily true that *"in most cases, this will require you to contact the copyright holder in advance to obtain permission."* The majority of work published by BioRxiv is licensed under a [Creative Commons](https://creativecommons.org/) license variant that expressly permits redistribution. We respect the authors' intent by redistributing all the CC and public domain works free of charge, with attribution, here.  
All works licensed under restrictive licenses that prohibit redistribution have been removed from the dataset and are not present in the torrent. This work is listed on academictorrents as CC BY-NC-ND 4.0, the most restrictive of the licenses found in the dataset, but the license for each work is provided in a `license_map.json` within the torrent. See https://sciop.net/datasets/biorxiv for further details about the creation of these torrents.</description>
<size>918317694976</size>
</item><item>
<title>BioRxiv - CC &amp; Public Domain Catalog - 2021</title>
<category>Dataset</category>
<infohash>c8ab36be273872466f6a391af4e42e6541c8e65a</infohash>
<guid>https://academictorrents.com/details/c8ab36be273872466f6a391af4e42e6541c8e65a</guid>
<link>https://academictorrents.com/details/c8ab36be273872466f6a391af4e42e6541c8e65a</link>
<description>Part of a set of torrents - Index: https://sciop.net/datasets/biorxiv - Back Catalogue: (in progress) - 2018: https://academictorrents.com/details/1509e322f49fd946ab441aa7b092f53879971d87 - 2019: https://academictorrents.com/details/1956fb55a853aaf0558a20f75adfcb65154b7c6a - 2020: https://academictorrents.com/details/b81c1be4f0b7ec622ec9cbde2551aaf1547dc33c - 2021: (this torrent) - 2022: https://academictorrents.com/details/d33e08e51ece62509bc72042514a19a66d225bd6 - 2023: https://academictorrents.com/details/4a5dc447e3a3e0b338abaa26517689c5e804c13f - 2024: (in progress) - 2025 through 25-03-10: https://academictorrents.com/details/d70fda6123588f88478e36204b8be9a751f415da --- Full archive of [MECA](https://www.niso.org/standards-committees/meca)-formatted dumps from BioRxiv's [full text S3 endpoint](https://www.biorxiv.org/tdm). ## Format These torrents are hybrid bittorrent v1/v2 torrents - this facilitates mutation, indexing, and download of individual files. You should use a bittorrent v2 capable client to download (e.g. qbittorrent with libtorrent 2, listed as "qt6 lt20" on the download page). Academictorrents currently does not understand v2 torrent files - **the total size of the torrent listed on academictorrents is thus incorrect.** Hybrid torrents contain [BEP 47](https://www.bittorrent.org/beps/bep_0047.html) padding files to align the v1 pieces so each covers at most one file. A torrent client that understands v2 will *not download these files* since they are just empty placeholders. These torrents also have had to make some modifications to the original source structure in order to fit within the 10MB torrent limit on academictorrents ([see issue](https://github.com/academictorrents/academictorrents-docs/issues/46)). The primary contributor to torrent size is the duplication of file names in hybrid torrents, so the original meca filenames have been replaced with the DOI suffix for the item in the meca. 
If you seed this torrent, consider also snatching and seeding the v2-only torrent that will be uploaded to sciop shortly; it should be much more efficient. Individual item metadata is contained within the JATS XML of the meca (a meca is just a zip file, so it can be read without decompressing the whole archive), but some summary metadata is included for indexing purposes: - `doi_map.json`: maps the item DOI to the location within the torrent - `license_map.json`: maps the license to the meca - `license_counts.json`: summary statistics for each license kind - `errors.json`: any errors that were encountered while creating the torrent. ## Legality BioRxiv's bulk access page (currently) reads: &gt; The TDM repository is not intended as a source for further redistribution of articles posted on bioRxiv, or their derivatives, nor does it grant others permission to re-host content posted on bioRxiv. For most articles submitted to bioRxiv, authors retain copyright and reuse rights. If you build indexing services or tools based on the full text of articles, you must therefore link back to the text hosted at bioRxiv rather than re-host content. For reuse/redistribution of individual articles or their derivatives, please consult the licensing terms applied by the authors, which are provided in the metadata. In most cases, this will require you to contact the copyright holder in advance to obtain permission. It is true that *authors determine the copyright status of their work*, but it is not necessarily true that *"in most cases, this will require you to contact the copyright holder in advance to obtain permission."* The majority of work published by BioRxiv is licensed under a [Creative Commons](https://creativecommons.org/) license variant that expressly permits redistribution. We respect the authors' intent by redistributing all the CC and public domain works free of charge, with attribution, here.  
All works licensed under restrictive licenses that prohibit redistribution have been removed from the dataset and are not present in the torrent. This work is listed on academictorrents as CC BY-NC-ND 4.0, the most restrictive of the licenses found in the dataset, but the license for each work is provided in a `license_map.json` within the torrent. See https://sciop.net/datasets/biorxiv for further details about the creation of these torrents.</description>
<size>918451912704</size>
</item><item>
<title>BioRxiv - CC &amp; Public Domain Catalog - 2020</title>
<category>Dataset</category>
<infohash>b81c1be4f0b7ec622ec9cbde2551aaf1547dc33c</infohash>
<guid>https://academictorrents.com/details/b81c1be4f0b7ec622ec9cbde2551aaf1547dc33c</guid>
<link>https://academictorrents.com/details/b81c1be4f0b7ec622ec9cbde2551aaf1547dc33c</link>
<description>Part of a set of torrents - Index: https://sciop.net/datasets/biorxiv - Back Catalogue: (in progress) - 2018: https://academictorrents.com/details/1509e322f49fd946ab441aa7b092f53879971d87 - 2019: https://academictorrents.com/details/1956fb55a853aaf0558a20f75adfcb65154b7c6a - 2020: (this torrent) - 2021: https://academictorrents.com/details/c8ab36be273872466f6a391af4e42e6541c8e65a - 2022: https://academictorrents.com/details/d33e08e51ece62509bc72042514a19a66d225bd6 - 2023: https://academictorrents.com/details/4a5dc447e3a3e0b338abaa26517689c5e804c13f - 2024: (in progress) - 2025 through 25-03-10: https://academictorrents.com/details/d70fda6123588f88478e36204b8be9a751f415da --- Full archive of [MECA](https://www.niso.org/standards-committees/meca)-formatted dumps from BioRxiv's [full text S3 endpoint](https://www.biorxiv.org/tdm). Scraped on an annual basis, with the initial upload in March 2025 partially complete. ## Format These torrents are hybrid bittorrent v1/v2 torrents - this facilitates mutation, indexing, and download of individual files. You should use a bittorrent v2 capable client to download (e.g. qbittorrent with libtorrent 2, listed as "qt6 lt20" on the download page). Academictorrents currently does not understand v2 torrent files - **the total size of the torrent listed on academictorrents is thus incorrect.** Hybrid torrents contain [BEP 47](https://www.bittorrent.org/beps/bep_0047.html) padding files to align the v1 pieces so each covers at most one file. A torrent client that understands v2 will *not download these files* since they are just empty placeholders. These torrents also have had to make some modifications to the original source structure in order to fit within the 10MB torrent limit on academictorrents ([see issue](https://github.com/academictorrents/academictorrents-docs/issues/46)). 
The primary contributor to torrent size is the duplication of file names in hybrid torrents, so the original meca filenames have been replaced with the DOI suffix for the item in the meca. If you seed this torrent, consider also snatching and seeding the v2-only torrent that will be uploaded to sciop shortly; it should be much more efficient. Individual item metadata is contained within the JATS XML of the meca (a meca is just a zip file, so it can be read without decompressing the whole archive), but some summary metadata is included for indexing purposes: - `doi_map.json`: maps the item DOI to the location within the torrent - `license_map.json`: maps the license to the meca - `license_counts.json`: summary statistics for each license kind - `errors.json`: any errors that were encountered while creating the torrent. ## Legality BioRxiv's bulk access page (currently) reads: &gt; The TDM repository is not intended as a source for further redistribution of articles posted on bioRxiv, or their derivatives, nor does it grant others permission to re-host content posted on bioRxiv. For most articles submitted to bioRxiv, authors retain copyright and reuse rights. If you build indexing services or tools based on the full text of articles, you must therefore link back to the text hosted at bioRxiv rather than re-host content. For reuse/redistribution of individual articles or their derivatives, please consult the licensing terms applied by the authors, which are provided in the metadata. In most cases, this will require you to contact the copyright holder in advance to obtain permission. 
It is true that *authors determine the copyright status of their work*, but it is not necessarily true that *"in most cases, this will require you to contact the copyright holder in advance to obtain permission."* The majority of work published by BioRxiv is licensed under a [Creative Commons](https://creativecommons.org/) license variant that expressly permits redistribution. We respect the authors' intent by redistributing all the CC and public domain works free of charge, with attribution, here.  All works licensed under restrictive licenses that prohibit redistribution have been removed from the dataset and are not present in the torrent. This work is listed on academictorrents as CC BY-NC-ND 4.0, the most restrictive of the licenses found in the dataset, but the license for each work is provided in a `license_map.json` within the torrent. See https://sciop.net/datasets/biorxiv for further details about the creation of these torrents.</description>
<size>885920890880</size>
</item><item>
<title>noaa-ncei-satellite-ocean-heat-content-suite</title>
<category>Dataset</category>
<infohash>ebfb85f2efdea5a0bb65098ecf33ba655cfc7ff3</infohash>
<guid>https://academictorrents.com/details/ebfb85f2efdea5a0bb65098ecf33ba655cfc7ff3</guid>
<link>https://academictorrents.com/details/ebfb85f2efdea5a0bb65098ecf33ba655cfc7ff3</link>
<description>This collection contains an operational Satellite Ocean Heat Content Suite (SOHCS) product generated by NOAA National Environmental Satellite, Data, and Information Service (NESDIS). The operational algorithm implemented was developed at the University of Miami/Rosenstiel School of Marine and Atmospheric Science (RSMAS). The SOHCS product measures the integrated vertical temperature from the sea surface to the depth of the 26°C isotherm. The algorithm uses a reduced gravity model to estimate the 20 degree isotherm depth based on objectively analyzed blended sea surface height anomaly fields from operational altimeters (Satellite with ARgos and ALtiKa (SARAL), Jason-1, Jason-2 and Cryosat-2) and Geo-Polar blended SST analyses. The data consists of seven parameters including sea surface height anomaly and its mapping error, depth of the 20° and 26° Celsius isotherms, mixed layer depth, ocean heat content and sea surface temperature. The grid spacing of the data in both latitude and longitude is 0.25°.</description>
<size>220389839148</size>
</item><item>
<title>noaa-ncei-us-historical-climatology-network-v2-v2.5</title>
<category>Dataset</category>
<infohash>51d37be876aef156af3a795603c9412d94089561</infohash>
<guid>https://academictorrents.com/details/51d37be876aef156af3a795603c9412d94089561</guid>
<link>https://academictorrents.com/details/51d37be876aef156af3a795603c9412d94089561</link>
<description>U.S. Historical Climatology Network (USHCN) data are used to quantify national and regional-scale temperature changes in the contiguous United States (CONUS). The dataset provides adjustments for systematic, non-climatic changes that bias temperature trends of monthly temperature records of long-term COOP stations. USHCN is a designated subset of the NOAA Cooperative Observer Program (COOP) Network, with sites selected according to their spatial coverage, record length, data completeness, and historical stability. Version 2.5 was released as a revision to the version 2.0 dataset in October 2012. The processing steps for versions 2 and 2.5 are essentially the same, but the version number change reflects modifications to the underlying database as well as coding changes to the pairwise homogenization algorithm (PHA) that improve its overall efficiency. Table 1 (below) lists these modifications. NCEI Technical Reports GHCNM-12-01R (Williams et al., 2012a) and GHCNM-12-02 (Williams et al., 2012b) provide details regarding the PHA modifications. Version 2 monthly temperature data incorporated an expanded database of raw temperature values from COOP stations, a new set of quality control checks, and a more comprehensive homogenization algorithm. A detailed description of the version 2 temperature dataset and processing steps is given in Menne et al. (2009). Please see the readme file in the v2.5 directory for information on downloading and reading the v2.5 data. The status file provides information about the processing of the USHCN v2.5 data. Pairwise Homogenization Algorithm (PHA) software (Menne and Williams, 2009) version 52i is used to detect and adjust for documented and undocumented inhomogeneities in the USHCN version 2.5 monthly temperature dataset. PHA version 52d was used to adjust the v2 dataset. Please refer to the readme file in these directories for guidance on how to download, uncompress, compile, and run the pairwise homogenization software. 
The "tar/gzipped" file contains all of the necessary software to run the pairwise homogenization procedure. A simulated test dataset is included with the software along with a file of the expected output. Use the file of the expected output to verify proper execution of the code.</description>
<size>266291831</size>
</item><item>
<title>noaa-ncei-halocarbons-and-other-atmospheric-trace-gas-species-esrl-gmd</title>
<category>Dataset</category>
<infohash>036ae534dcd62cfd97371f444926ac2fddbadf3b</infohash>
<guid>https://academictorrents.com/details/036ae534dcd62cfd97371f444926ac2fddbadf3b</guid>
<link>https://academictorrents.com/details/036ae534dcd62cfd97371f444926ac2fddbadf3b</link>
<description>The Halocarbons and other Atmospheric Trace Species (HATS) group aims to quantify the atmospheric burden, distributions, and magnitudes of sources and sinks for nitrous oxide and other halogen-containing compounds. They utilize numerous types of platforms, including ground-based stations, towers, ships, aircraft, and balloons to accomplish their mission. HATS measures chlorofluorocarbons (CFCs) at measurement sites spanning the globe. CFCs are non-toxic, non-flammable chemicals that contain carbon, chlorine, and fluorine atoms. CFCs were used as solvents, refrigerants, and aerosol sprays. While inert in the troposphere, they decompose in the stratosphere to release chlorine for destructive reactions with ozone. This process eventually led to the creation of the "Ozone Hole" over the Antarctic. Monitoring the amounts of CFCs and other trace gases is important, both for tracking the growth and recovery of the Ozone Hole, and because many upward trending trace gases are potent and durable greenhouse gases. Original in-situ sampling electron capture gas chromatographs ("RITS"): The Radiatively Important Trace Species (RITS) program consisted of five stand-alone systems that were used to make in-situ measurements at Barrow, AK (BRW), Mauna Loa, HI (MLO), American Samoa (SMO), South Pole, Antarctica (SPO), and Niwot Ridge, CO (NWR) from 1983 until 2001 when the last of the systems was retired. The RITS systems were replaced by the next-generation CATS systems that have remained operational since then. The RITS systems measured nitrous oxide (N2O), the chlorofluorocarbons CFC-12 (CCl2F2), CFC-11 (CCl3F), and CFC-113 (CCl2F-CClF2, although quality measurements of this gas have been nullified by the lack of stable references during the RITS period), methyl chloroform (CH3CCl3), and carbon tetrachloride (CCl4) once per hour. 
Through the Big Earth Data Initiative (BEDI), ESRL/GMD has taken their data collection and converted files into NetCDF-4, a self-describing format.</description>
<size>1274285451</size>
</item><item>
<title>noaa-ncei-ocean-altimetry</title>
<category>Dataset</category>
<infohash>d20e8700e1ed7973c0b4ed17ef764414e76efbf9</infohash>
<guid>https://academictorrents.com/details/d20e8700e1ed7973c0b4ed17ef764414e76efbf9</guid>
<link>https://academictorrents.com/details/d20e8700e1ed7973c0b4ed17ef764414e76efbf9</link>
<description>This dataset contains global and regional mean sea level time series and trend maps calculated on a continual basis since December 1992 by Laboratory for Satellite Altimetry (LSA), Center for Satellite Applications and Research (STAR), NESDIS/NOAA. They are based on data from the series of reference missions (TOPEX/Poseidon, Jason-1 and Jason-2) that provide global mean sea level about every ten days with an uncertainty of 2-4 mm. In addition, estimates of mean sea level are also calculated from other satellite altimeter missions, including Geosat Follow-on, Envisat, ERS-1, and ERS-2.</description>
<size>4304906</size>
</item><item>
<title>noaa-ncei-land-normalized-difference-vegetation-index</title>
<category>Dataset</category>
<infohash>d7c94efb457b7094b8f7497563801b62b9e49278</infohash>
<guid>https://academictorrents.com/details/d7c94efb457b7094b8f7497563801b62b9e49278</guid>
<link>https://academictorrents.com/details/d7c94efb457b7094b8f7497563801b62b9e49278</link>
<description>Contains two datasets: AVHRR: This dataset contains gridded daily Normalized Difference Vegetation Index (NDVI) derived from the NOAA Climate Data Record (CDR) of Advanced Very High Resolution Radiometer (AVHRR) Surface Reflectance. The data record spans from 1981 to 10 days before the present using data from eight NOAA polar orbiting satellites: NOAA-7, -9, -11, -14, -16, -17, -18 and -19. The data are projected on a 0.05 degree x 0.05 degree global grid. This dataset is one of the Land Surface CDR Version 5 products produced by the NASA Goddard Space Flight Center (GSFC) and the University of Maryland (UMD). Improvements for Version 5 include using the improved surface reflectance data, correcting the data for known errors in time, latitude, and longitude variables, as well as improvements in the global and variable attribute definitions. The dataset is in the netCDF-4 file format following ACDD and CF Conventions. The dataset is accompanied by algorithm documentation, data flow diagram and source code for the NOAA CDR Program. VIIRS: This dataset contains gridded daily Normalized Difference Vegetation Index (NDVI) derived from the NOAA Climate Data Record (CDR) of Visible Infrared Imaging Radiometer Suite (VIIRS) Surface Reflectance. The data record spans from 2014 to 10 days before the present using data from NOAA polar orbiting satellites. The data are projected on a 0.05 degree x 0.05 degree global grid. This dataset is one of the Land Surface CDR products produced by the NASA Goddard Space Flight Center (GSFC) and the University of Maryland (UMD). The dataset is in the netCDF-4 file format following ACDD and CF Conventions. The dataset is accompanied by algorithm documentation, data flow diagram and source code for the NOAA CDR Program.</description>
<size>1078357679116</size>
</item><item>
<title>noaa-ncei-total-solar-irradiance</title>
<category>Dataset</category>
<infohash>7b69cf35d54aa7c9980af748e9b15a56271ce4c7</infohash>
<guid>https://academictorrents.com/details/7b69cf35d54aa7c9980af748e9b15a56271ce4c7</guid>
<link>https://academictorrents.com/details/7b69cf35d54aa7c9980af748e9b15a56271ce4c7</link>
<description>The Total Solar Irradiance (TSI) Climate Data Record (CDR) measures the spectrally integrated energy input to the top of the Earth's atmosphere at a base mean distance from the Sun (i.e., one Astronomical Unit). The TSI units are Watts per square meter (W m-2). This CDR is constructed using Version 1 of the NASA NOAA LASP (NNL) solar variability models that identify and quantify irradiance change relative to baseline reference Sun conditions at daily, monthly, and yearly intervals. This CDR applies model coefficients derived from linear regression to proxies of bolometric (i.e., integrated over all wavelengths) change due to facular brightening and sunspot darkening, and compares the results to a composite record of total solar irradiance between 2003 and 2024. The TSI record is made up of individual measurements made by Total Irradiance Monitor (TIM) instruments on the SOlar Radiation and Climate Experiment (SORCE) mission, the TSI Continuity Transfer Experiment (TCTE), the Total and Spectral Irradiance Sensor (TSIS-1) mission, and the Compact Total Irradiance Monitor (CTIM) flight demonstration. The daily and monthly data records span from 1874 to present, and the yearly data record spans from 1610 to present.</description>
<size>49597031</size>
</item><item>
<title>noaa-ncei-cmorph-high-resolution-global-precipitation-estimates</title>
<category>Dataset</category>
<infohash>84869b28e308afe79b872698af51f773d1344bff</infohash>
<guid>https://academictorrents.com/details/84869b28e308afe79b872698af51f773d1344bff</guid>
<link>https://academictorrents.com/details/84869b28e308afe79b872698af51f773d1344bff</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. The Satellite Precipitation - CMORPH Climate Data Record (CDR) consists of satellite precipitation estimates that have been bias corrected and reprocessed using the Climate Prediction Center (CPC) Morphing Technique (MORPH) to form a global, high resolution precipitation analysis. Data is reprocessed on a global grid with 8km-by-8km spatial resolution. Temporal resolution is 30 minutes over a 20 year period of record (January 1998 to present). The output precipitation fields are produced on three different time-space resolutions to accommodate a variety of user requirements. This data set is for the bias-corrected, reprocessed CPC Morphing technique (CMORPH) high-resolution global satellite precipitation estimates. The CMORPH satellite precipitation estimates are created in two steps. First, the purely satellite-based global fields of precipitation are constructed through integrating Level 2 retrievals of instantaneous precipitation rates from all available passive microwave (PMW) measurements aboard low earth orbiting platforms. Bias in these integrated satellite precipitation estimates is then removed through comparison against CPC daily gauge analysis over land and adjustment against the Global Precipitation Climatology Project (GPCP) merged analysis of pentad precipitation over ocean. The bias corrected CMORPH satellite precipitation estimates are created on an 8kmx8km grid over the global domain from 60deg S to 60deg N and in a 30-minute interval from January 1, 1998. Due to the delay of some input data sets, this formal version (Version 1) bias corrected CMORPH is produced manually once a month at a latency of 3-4 months. 
For the CDR production, the bias corrected CMORPH generated at its native resolution of 8km x 8km / 30-minute is upscaled to form three sets of data files of different time/space resolution for improved user experience:
a) Full-resolution CMORPH. Output variable: precipitation rate in mm/hour; spatial resolution: 8km x 8km (at equator); spatial coverage: global (60S-60N); temporal resolution: 30 minutes; data period: January 1, 1998 to the present.
b) Hourly CMORPH. Output variable: precipitation rate in mm/hour; spatial resolution: 0.25deg lat/lon; spatial coverage: global (60S-60N); temporal resolution: hourly; data period: January 1, 1998 to the present.
c) Daily CMORPH. Output variable: daily precipitation in mm/day; spatial resolution: 0.25deg lat/lon; spatial coverage: global (60S-60N); temporal resolution: daily; data period: January 1, 1998 to the present.
(b) and (c) are derived from and quantitatively consistent with the CMORPH at its original resolution (a). Contains archive files; does not include access files (archive files should contain the full data).</description>
<size>431782567936</size>
</item><item>
<title>noaa-ncei-international-comprehensive-ocean-atmosphere</title>
<category>Dataset</category>
<infohash>5acf89c8ead6fee0d3567da636ce2357cd4dc135</infohash>
<guid>https://academictorrents.com/details/5acf89c8ead6fee0d3567da636ce2357cd4dc135</guid>
<link>https://academictorrents.com/details/5acf89c8ead6fee0d3567da636ce2357cd4dc135</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. Authors: Freeman, E.; S.D. Woodruff; S.J. Worley; S.J. Lubker; E.C. Kent; W.E. Angel; D.I. Berry; P. Brohan; R. Eastman; L. Gates; W. Gloeden; Z. Ji; J. Lawrimore; N.A. Rayner; G. Rosenhagen; S.R. Smith; Gahtan, J.; K. R. Knapp; C. J. Schreck; H. J. Diamond; J. P. Kossin; M. C. Kruk; R.W. Reynolds; C. Wilkinson; S. Claesson; F. Koek; C. Marzin; D. Wheeler; Charpentier, E.; D.E. Harrison; J.R. Keeley; M. Mietus; M. Rutherford; V. Swail; H.F. Diaz; T. Arbetter; C. Folland; D. Parker; R. Saunders; V. Smolyanitsky; T. Yoshida; N. Lott; D. Dehenauw; T. Manabe; WMO (World Meteorological Organization); J.D. Elms; H.-J. Isemer; C. Hanson; K. Wolter; R.J. Slutz; R.L. Jenne; P.M. Steurer; J.D. Hiscox; D.H. Joseph, ICOADS Value-Added Database (IVAD), JCOMM Expert Team on Marine Climatology (ETMC), RECovery of Logbooks And International Marine data (RECLAIM) Project. The International Comprehensive Ocean-Atmosphere Data Set (ICOADS) offers surface marine data spanning 1662 to present, and simple gridded monthly summary products for 2° latitude x 2° longitude boxes back to 1800 (and 1°x1° boxes since 1960); these data and products are freely distributed worldwide. As it contains observations from many different observing systems encompassing the evolution of measurement technology over hundreds of years, ICOADS is probably the most complete and heterogeneous collection of surface marine data in existence.</description>
<size>1920257131445</size>
</item><item>
<title>noaa-ncei-integrated-ocean-observing-system</title>
<category>Dataset</category>
<infohash>25026815b9a57fd843e0e475cab251424e0bd41c</infohash>
<guid>https://academictorrents.com/details/25026815b9a57fd843e0e475cab251424e0bd41c</guid>
<link>https://academictorrents.com/details/25026815b9a57fd843e0e475cab251424e0bd41c</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. The IOOS Catalog (https://data.ioos.us/) is an open data portal containing IOOS' portfolio of oceanographic observations and forecast products provided by IOOS' 11 Regional Associations (RAs), functional Data Assembly Centers (DACs) such as the HF Radar DAC and Glider DAC, and IOOS' federal partners. The Catalog inventories all IOOS Data Management (DMAC)-compliant data access service endpoints provided by these entities in a single metadata repository, for discovery by end users. The Catalog is populated by ISO 19115 metadata records that describe the observations taken and forecast model outputs produced by the RAs and DACs using DMAC-recommended standard vocabularies and data formats wherever possible (netCDF-CF [1], ACDD [2]). The RAs and DACs publish their metadata to web accessible folders, or OGC CS-W services, and the Catalog harvests metadata from these locations on a daily basis. Because IOOS data provider metadata is often produced in an automated fashion by software reading native data file attributes (such as CF attribution in a netCDF file, for example), a daily automated harvest of the ISO XML metadata is necessary, in order to keep frequently varying information such as dataset time coverage current.</description>
<size>215289350408</size>
</item><item>
<title>noaa-ncei-ocean-carbon-acidification-datasystem</title>
<category>Dataset</category>
<infohash>e490a659449cc55cf59a8f32fbd2b230f63e7829</infohash>
<guid>https://academictorrents.com/details/e490a659449cc55cf59a8f32fbd2b230f63e7829</guid>
<link>https://academictorrents.com/details/e490a659449cc55cf59a8f32fbd2b230f63e7829</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. OCADS manages a wide range of ocean carbon and acidification data, including chemical, physical, and biological observations collected from research vessels, ships of opportunity, and uncrewed platforms, as well as laboratory experiment results and model outputs. Additionally, OCADS serves as a repository for Global Ocean Observing System (GOOS) biogeochemistry Essential Ocean Variables (EOVs) that are closely related to ocean carbon and acidification research, e.g., oxygen, nutrients, transient tracers, and stable isotopes. As a new development, OCADS accepts submissions of data generated from marine Carbon Dioxide Removal (mCDR) and Ocean Alkalinity Enhancement (OAE) related research. The mission of OCADS is to work closely with our data partners to provide data management services that facilitate and support research on ocean carbon cycling and ocean acidification. This is accomplished through:
- Safeguarding data in a well-supported federal archive to ensure long-term accessibility (&gt;75 years)
- Serving as one of the world's leading providers of ocean carbon and acidification data, information, and products
- Providing data management support for quality control, synthesis, and data product development activities
OCADS prioritizes a customer-centric approach and is committed to gathering knowledge and expertise from the research community to improve its data management practices. One of our goals is to make ocean carbon and acidification data available through one portal. We welcome data submissions from researchers and organizations around the world. Metadata includes full files from metadata JSON, organized by accession number.</description>
<size>237432470755</size>
</item><item>
<title>ShitSpotter - 2025-04-20</title>
<category>Dataset</category>
<infohash>27a2512ae93298f75544be6d2d629dfb186f86cf</infohash>
<guid>https://academictorrents.com/details/27a2512ae93298f75544be6d2d629dfb186f86cf</guid>
<link>https://academictorrents.com/details/27a2512ae93298f75544be6d2d629dfb186f86cf</link>
<description>ShitSpotter dataset 2025-04-20 Previous version: https://academictorrents.com/details/ee8d2c87a39ea9bfe48bef7eb4ca12eb68852c49</description>
<size>64300091544</size>
</item><item>
<title>noaa-ncei-global-ocean-current-database</title>
<category>Dataset</category>
<infohash>1d928de44371e9931506e23123f8ea4aa465e3f4</infohash>
<guid>https://academictorrents.com/details/1d928de44371e9931506e23123f8ea4aa465e3f4</guid>
<link>https://academictorrents.com/details/1d928de44371e9931506e23123f8ea4aa465e3f4</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. The Global Ocean Current Database (GOCD) integrates ocean current data from a wide variety of capture methods, resolutions, and formats into a single-format (NetCDF) archive. The GOCD is a valuable resource that gives scientists and researchers a comprehensive depiction of global current activity and structure. It also allows ocean modelers, ocean resource managers, and the shipping industry to quantify the impact of currents on their operations.</description>
<size>4919880263</size>
</item><item>
<title>noaa-ncei-sea-ice-concentration-cdr</title>
<category>Dataset</category>
<infohash>2fe6faecc941732c6a02702226743b556e015ab3</infohash>
<guid>https://academictorrents.com/details/2fe6faecc941732c6a02702226743b556e015ab3</guid>
<link>https://academictorrents.com/details/2fe6faecc941732c6a02702226743b556e015ab3</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. The Sea Ice Concentration Climate Data Record (CDR) is a set of consistent, daily and monthly sea ice concentration data time series. Data is available for both the north and south Polar Regions on a 25 km x 25 km grid from 1978 to present. These data can be used to estimate how much of the ocean surface is covered by ice and monitor changes in sea ice concentration. This CDR combines concentration estimates using two algorithms developed at the NASA Goddard Space Flight Center (GSFC). Gridded brightness temperature data inputs come from the following sources:
- Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR)
- Defense Meteorological Satellite Program (DMSP) series of passive microwave radiometers: Special Sensor Microwave Imager (SSM/I) and Special Sensor Microwave Imager/Sounder (SSMIS)</description>
<size>2746264799</size>
</item><item>
<title>noaa-ncei-world-ocean-database</title>
<category>Dataset</category>
<infohash>c0dce33ade7d0f828a542d5bed069b8909b3ee87</infohash>
<guid>https://academictorrents.com/details/c0dce33ade7d0f828a542d5bed069b8909b3ee87</guid>
<link>https://academictorrents.com/details/c0dce33ade7d0f828a542d5bed069b8909b3ee87</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. The World Ocean Database (WOD) is the world's largest publicly available uniform-format, quality-controlled ocean profile dataset. Ocean profile data are sets of measurements of an ocean variable vs. depth at a single geographic location within a short (minutes to hours) temporal period in some portion of the water column from the surface to the bottom. To be considered a profile for the WOD, there must be more than a single depth/variable pair. Multiple profiles at the same location from the same set of instruments constitute an oceanographic cast. Ocean variables in the WOD include temperature, salinity, oxygen, nutrients, tracers, and biological variables such as plankton and chlorophyll. Quality control procedures are documented and performed on each cast, and the results are included as flags on each measurement. The WOD contains the data on the originally measured depth levels (observed) and also interpolated to standard depth levels to present a more uniform set of iso-surfaces for oceanographic and climate work. The source of the WOD is more than 20,000 separate archived datasets contributed by institutions, projects, government agencies, and individual investigators from the United States and around the world. Each dataset is available in its original form in the National Centers for Environmental Information data archives. All datasets are converted to the same standard format, checked for duplication within the WOD, and assigned quality flags based on objective tests. Additional subjective flags are set upon calculation of the ocean climatological mean fields which make up the World Ocean Atlas (WOA) series. The WOD consists of periodic major releases and quarterly updates to those releases. Each major release is associated with a concurrent WOA release, and contains the final quality control flags used in the WOA, which includes manual as well as automated steps. 
Each quarterly update release includes additional historical and recent data and preliminary quality control. The latest major release was WOD 2018 (WOD18), which includes nearly 16 million oceanographic casts, from the second voyage of Captain Cook (1772) to the modern Argo floats (end of 2017). The WOD presents data in netCDF ragged array format following the Climate and Forecast (CF) conventions, for ease of use while remaining mindful of space limitations.</description>
<size>178109511503</size>
</item><item>
<title>noaa-ncei-us-climate-reference-network</title>
<category>Dataset</category>
<infohash>362f3e47404d04b589a376b39ab7ee65b19b873d</infohash>
<guid>https://academictorrents.com/details/362f3e47404d04b589a376b39ab7ee65b19b873d</guid>
<link>https://academictorrents.com/details/362f3e47404d04b589a376b39ab7ee65b19b873d</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. The U.S. Climate Reference Network (USCRN) is a systematic and sustained network of climate monitoring stations with sites across the contiguous U.S., Alaska, and Hawaii. These stations use high-quality instruments to measure temperature, precipitation, soil conditions, and more. USCRN provides a continuous series of climate observations to monitor national climate trends and support climate-impact research.</description>
<size>62282230852</size>
</item><item>
<title>noaa-ncei-sudden-stratospheric-warming-compendium</title>
<category>Dataset</category>
<infohash>061b946dfaf3e20e08ef5214ab7f6413983205af</infohash>
<guid>https://academictorrents.com/details/061b946dfaf3e20e08ef5214ab7f6413983205af</guid>
<link>https://academictorrents.com/details/061b946dfaf3e20e08ef5214ab7f6413983205af</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. The Sudden Stratospheric Warming Compendium (SSWC) data set documents the stratospheric, tropospheric, and surface climate impacts of sudden stratospheric warmings. The compendium examines major mid-winter warmings, defined by a reversal of the zonal winds at 10 hPa and 60N from westerly to easterly between November and March. The one major warming that occurred in the Southern Hemisphere is also included. Analyses are available from 6 different reanalyses: MERRA2 (1980-2014), JRA-55 (1958-2014), ERA-Interim (1979-2014), ERA-40 (1958-2002), NOAA20CR (1958-2011), and NCEP-NCAR I (1958-2014). Global gridded anomaly fields are calculated from smoothed daily climatologies based on the full record of each reanalysis. Data is provided 60 days prior to and after each SSW event. Pressure-level fields include winds, temperatures, geopotential heights, vorticity and potential vorticity, and ozone; surface fields include temperatures, sea-level pressures, and precipitation. Other derived data and climate indices are also included. Data are provided as daily means, with 2.5x2.5 degree horizontal resolution for pressure-level fields and native horizontal resolution for surface fields. Daily climatological means and standard deviations of all input fields are included with the output.</description>
<size>357811903048</size>
</item><item>
<title>noaa-ncei-solar-spectral-irradiance</title>
<category>Dataset</category>
<infohash>d4f39bcbef17400329f4e14304df3ef451bea890</infohash>
<guid>https://academictorrents.com/details/d4f39bcbef17400329f4e14304df3ef451bea890</guid>
<link>https://academictorrents.com/details/d4f39bcbef17400329f4e14304df3ef451bea890</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. The Solar Spectral Irradiance (SSI) Climate Data Record (CDR) tracks the solar energy reaching the top of Earth's atmosphere at different wavelengths, measured in Watts per square meter per nanometer (W m-2 nm-1) at a base mean distance from the Sun (i.e., one Astronomical Unit). This record is crucial for space weather, space climate, and Earth climate modeling, as well as for studying atmospheric chemistry and dynamics. The SSI CDR includes two data products:
- Broad Spectrum (0 to 200,000 nm): This product covers a wide range of wavelengths and is widely used in climate models. It is based on Version 1 of the NASA NOAA LASP (NNL) solar variability models. These models estimate changes in solar irradiance due to faculae and sunspots, as well as the F10.7 cm solar radio flux. The data is available in 4,300 variable-width wavelength bands.
- High-Resolution Spectrum (115 to 500 nm): This product offers higher spectral resolution, ideal for atmospheric studies. It also uses NNL models but applies a different set of coefficients to estimate irradiance changes and is consistent in magnitude and variability with the Broad Spectrum product. The data is available in 9,700 variable-width wavelength bands.
The products have been validated with measurements from the Total and Spectral Irradiance Sensor (TSIS-1) mission and, at extreme ultraviolet wavelengths, the Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) mission. The records include daily and monthly data from 1874 to the present, and yearly data from 1610 to the present.</description>
<size>5171890791</size>
</item><item>
<title>noaa-ncei-sea-surface-temperature-extended-reconstructed</title>
<category>Dataset</category>
<infohash>fe82920872b8d7204721ad154973a17e5d59238d</infohash>
<guid>https://academictorrents.com/details/fe82920872b8d7204721ad154973a17e5d59238d</guid>
<link>https://academictorrents.com/details/fe82920872b8d7204721ad154973a17e5d59238d</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. The Extended Reconstructed Sea Surface Temperature (ERSST) dataset is a global monthly analysis of SST data derived from the International Comprehensive Ocean-Atmosphere Data Set (ICOADS). The dataset can be used for long-term global and basin-wide studies and incorporates smoothed local and short-term variations. The NOAA Global Surface Temperature (NOAA GlobalTemp) product integrates ERSST data with land surface air temperature from the Global Historical Climatology Network-Monthly dataset to create integrated surface temperature analyses.</description>
<size>34577014</size>
</item><item>
<title>noaa-ncei-international-best-track-archive-for-climate-stewardship-ibtracs</title>
<category>Dataset</category>
<infohash>2ff9743464c285a24346b29716648aced930e8e2</infohash>
<guid>https://academictorrents.com/details/2ff9743464c285a24346b29716648aced930e8e2</guid>
<link>https://academictorrents.com/details/2ff9743464c285a24346b29716648aced930e8e2</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. The International Best Track Archive for Climate Stewardship (IBTrACS) project is the most complete global collection of tropical cyclones available. It merges recent and historical tropical cyclone data from multiple agencies to create a unified, publicly available, best-track dataset that improves inter-agency comparisons. IBTrACS was developed collaboratively with all the World Meteorological Organization (WMO) Regional Specialized Meteorological Centres, as well as other organizations and individuals from around the world.</description>
<size>1725715089</size>
</item><item>
<title>noaa-ncei-surface-underway-marine-database</title>
<category>Dataset</category>
<infohash>7304161afb57776cedf3b9bf71d1c89f3d956820</infohash>
<guid>https://academictorrents.com/details/7304161afb57776cedf3b9bf71d1c89f3d956820</guid>
<link>https://academictorrents.com/details/7304161afb57776cedf3b9bf71d1c89f3d956820</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. This collection contains global in-situ sea surface underway measurements from the NOAA NCEI Surface Underway Marine Database (NCEI-SUMD, formerly the NCEI Thermosalinograph Database). The database was originally developed to facilitate understanding and access to a set of quality-controlled in-situ sea surface temperature (SST) and salinity (SSS) measurements collected by thermosalinographs (TSG). The database was later expanded to include surface underway non-TSG data, such as meteorological data from vessel-mounted meteorological packages, microplastics data, and data from Unmanned Surface Vehicles, such as Saildrones and Wave Gliders. Data were in-situ underway data collected by thermosalinographs (TSG), meteorological packages, and other sensors from 1989 to the present from more than 450 platforms. These data are from multiple data assembly centers, including the Center for Ocean-Atmospheric Prediction Studies (COAPS; TSG data from the Shipboard Automated Meteorological and Oceanographic System (SAMOS)), Institut français de recherche pour l'exploitation de la mer (IFREMER; English: French Research Institute for Exploitation of the Sea; TSG data from The Global Ocean Surface Underway Data (GOSUD)), the Atlantic Oceanographic &amp; Meteorological Laboratory (AOML; TSG data), and NOAA National Centers for Environmental Information (NCEI). When duplicate data were found with different resolutions, the data with the highest sampling resolution were selected. All data were converted to a common netCDF format, following the Climate and Forecast (CF) and Attribute Convention for Data Discovery (ACDD) conventions and following the NCEI netCDF 2.0 trajectory feature type. 
All data were processed using the same 11-step quality control procedures and criteria and flagged using a two-level flag system to provide a well-organized, uniformly quality-controlled TSG dataset for the user community.</description>
<size>9261352503</size>
</item><item>
<title>noaa-ncei-coral-reef-information-system</title>
<category>Dataset</category>
<infohash>f750dafb20b1f5783a9e602793d8e917fcf7e95f</infohash>
<guid>https://academictorrents.com/details/f750dafb20b1f5783a9e602793d8e917fcf7e95f</guid>
<link>https://academictorrents.com/details/f750dafb20b1f5783a9e602793d8e917fcf7e95f</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. CoRIS is the Coral Reef Conservation Program's (CRCP) information portal that provides access to NOAA coral reef information and data products, with emphasis on the U.S. states, territories, and remote island areas. NOAA Coral Reef activities include coral reef mapping, monitoring and assessment; natural and socioeconomic research and modeling; outreach and education; and management and stewardship.</description>
<size>88789128742</size>
</item><item>
<title>noaa-ncei-land-leaf-area-index-and-fapar</title>
<category>Dataset</category>
<infohash>a72cbf8fd10a933b9bed7dc0c3769a3ef195c23a</infohash>
<guid>https://academictorrents.com/details/a72cbf8fd10a933b9bed7dc0c3769a3ef195c23a</guid>
<link>https://academictorrents.com/details/a72cbf8fd10a933b9bed7dc0c3769a3ef195c23a</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. This Climate Data Record (CDR) combines datasets for Leaf Area Index (LAI) and Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), two biophysical variables that can be used to evaluate vegetation stress, forecast agricultural yields, and support other modeling and resource management applications. LAI tracks the one-sided green leaf area per unit of ground surface area, while FAPAR quantifies the solar radiation absorbed by plants within the photosynthetically active radiation (PAR) spectral region. The LAI/FAPAR CDR generates a daily product on a 0.05° by 0.05° grid using data derived from Advanced Very High Resolution Radiometer (AVHRR) sensors from 1981-2013 and from the Visible Infrared Imaging Radiometer Suite (VIIRS) sensors from 2014 to 10 days before the present.</description>
<size>117780833610</size>
</item><item>
<title>noaa-ncei-ndbc-tao-buoy-combined</title>
<category>Dataset</category>
<infohash>af1e0b2d2e615f32787840714976ba95439a0b42</infohash>
<guid>https://academictorrents.com/details/af1e0b2d2e615f32787840714976ba95439a0b42</guid>
<link>https://academictorrents.com/details/af1e0b2d2e615f32787840714976ba95439a0b42</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. Contains three separate TAO datasets:
1. Physical and meteorological data from the Tropical Atmosphere Ocean (TAO) array in the tropical Pacific Ocean. The Tropical Atmosphere Ocean (TAO) Array of 55 moored buoys spans the tropical Pacific from longitudes 165°E to 95°W between latitudes of approximately 8°S and 9°N. Moorings within the array measure surface meteorological and upper-ocean parameters and transmit most data in real time to shore via Service Argos. The array was part of the in-situ measurement portion of the Tropical Ocean-Global Atmosphere (TOGA) Program, a 10-year (1985-1994) study of climate variability on seasonal to interannual time scales, the most pronounced mode of which is the El Niño/Southern Oscillation (ENSO) phenomenon (McPhaden, 1993).
2. Physical and meteorological delayed-mode full-resolution data from the Tropical Atmosphere Ocean (TAO) array in the Equatorial Pacific. The TAO array of moored buoys spans the tropical Pacific. Moorings within the array measure surface meteorological and upper-ocean parameters. This collection contains full-resolution, delayed-mode data, which the National Data Buoy Center processed and submitted to NODC in netCDF-formatted files.
3. Physical profile data collected in the Equatorial Pacific during cruises to service the TAO array, a network of deep ocean moored buoys, from 2007-04-07 to the present. As part of the Tropical Atmosphere Ocean (TAO) Program, the National Data Buoy Center (NDBC) was responsible for the at-sea collection, quality control and processing, and delivery of these CTD data in netCDF files to the National Oceanographic Data Center (NODC). NDBC collected these CTD data during cruises to service the TAO array of moorings.</description>
<size>13820135032</size>
</item><item>
<title>noaa-ncei-ndbc-coastal-marine-automated-network</title>
<category>Dataset</category>
<infohash>08b58a2dba6c43e3871dd312e1e6228f9cc62b06</infohash>
<guid>https://academictorrents.com/details/08b58a2dba6c43e3871dd312e1e6228f9cc62b06</guid>
<link>https://academictorrents.com/details/08b58a2dba6c43e3871dd312e1e6228f9cc62b06</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. The National Data Buoy Center (NDBC) established the Coastal-Marine Automated Network (C-MAN) for the National Weather Service in the early 1980s. NDBC has installed approximately 50 C-MAN stations on lighthouses, at capes and beaches, on near-shore islands, and on offshore platforms. NDBC has also deployed over 100 moored (a.k.a. weather) buoys in coastal and offshore waters from the western Atlantic to the Pacific Ocean around Hawaii, and from the Bering Sea to the South Pacific. C-MAN and moored buoy data typically include barometric pressure; wind direction, speed, and gust; and air temperature; however, some C-MAN stations are also equipped to measure seawater temperature, water level, waves, and relative humidity. Moored buoys measure wave energy spectra, from which NDBC derives significant wave height, dominant wave period, and average wave period. In addition, many moored buoys measure the direction of wave propagation. In collaboration, NDBC and the National Centers for Environmental Information (NCEI), formerly the National Oceanographic Data Center (NODC), are archiving these data from C-MAN and moored buoys. This collection is part of that collaboration, and it contains both NODC F291 and netCDF (version 4) files with data collected from February 1970 through the present day.</description>
<size>27178991309</size>
</item><item>
<title>noaa-ncei-world-ocean-atlas-23-data</title>
<category>Dataset</category>
<infohash>581bb9462e958df6011025b26fa1d609a2dafd73</infohash>
<guid>https://academictorrents.com/details/581bb9462e958df6011025b26fa1d609a2dafd73</guid>
<link>https://academictorrents.com/details/581bb9462e958df6011025b26fa1d609a2dafd73</link>
<description>CORRECTED AND UPDATED VERSION: may break previous downloaded versions. World Ocean Atlas 2023 (WOA23) is a set of objectively analyzed (one-degree grid and quarter-degree grid) climatological fields of in situ temperature, salinity, dissolved oxygen, Apparent Oxygen Utilization (AOU), percent oxygen saturation, phosphate, silicate, and nitrate at standard depth levels for annual, seasonal, and monthly compositing periods for the World Ocean. Quarter-degree fields are for temperature and salinity only. It also includes associated statistical fields of observed oceanographic profile data interpolated to standard depth levels on quarter-degree, one-degree, and five-degree grids. Temperature and salinity fields are available for seven decades (1955-1964, 1965-1974, 1975-1984, 1985-1994, 1995-2004, 2005-2014, and 2015-2022), for three thirty-year "climate normal" periods (1971-2000, 1981-2010, and 1991-2020), and for an average of the seven individual decades (1955-2022). Oxygen fields (as well as AOU and percent oxygen saturation) are available using all quality-controlled bottle, CTD, and profiling float data from 1965-2022, as well as a thirty-year "climate normal" using bottle and CTD data for 1971-2000. Nutrient fields are available using all quality-controlled bottle data from the entire sampling period between 1965-2022. This accession is a product generated by the National Centers for Environmental Information's (NCEI) Ocean Climate Laboratory Team. The analyses are derived from the NCEI World Ocean Database 2023.</description>
<size>663980525488</size>
</item><item>
<title>Gumroad | Plasticity - Industrial Design Ultimate Bundle</title>
<category>Course</category>
<infohash>87a158a0b172a8d58a5b364287e000466b662ba8</infohash>
<guid>https://academictorrents.com/details/87a158a0b172a8d58a5b364287e000466b662ba8</guid>
<link>https://academictorrents.com/details/87a158a0b172a8d58a5b364287e000466b662ba8</link>
<description>Visit &gt;&gt;&gt; http://onehack.us/ https://i.ibb.co/S4kZyw6P/plasticity-3d-modeling-course.png Gumroad - Plasticity - Industrial Design Ultimate Bundle

Course details
Learn how to master Surface Modeling in Plasticity with Industrial Product Design. Tap into your potential with this Plasticity course by experiencing cutting-edge industrial design and advanced surface modeling. Why focus on power tools? They offer the perfect blend of form and intricate details. Far from just another course, this is your transformative guide to pushing the limits of Plasticity. In this course, you will:
- Unparalleled Early Access: Be among the first to explore the most advanced Plasticity modeling techniques, right from the software's early stages.
- Perfect Surface Modeling: Learn the best surface modeling techniques even faster and easier.
- Ultra-Efficient Complex Modeling: Master the art of creating intricate industrial designs through highly efficient workflows, making a dramatic impact on your design speed.
- Achieve Unmatched Excellence: Complete the course with the skills to create models that are not just exceptional but also highlight a level of craftsmanship and aesthetics that sets you apart from 98% of people.
- Ultimate Comprehensive Learning: Experience a perfectly curated &amp; focused course, bringing your skills to new heights.

General Details:
Duration: 11h 8m 36s
Updated: 04/2025
Language: English
Subtitle: .SRT Included
Source: https://nikitakapustin1.gumroad.com/l/industrial-design-course
MP4 | Video: AVC, 1920x1080p | Audio: AAC, 48.000 KHz, 2 Ch | Included Course Material</description>
<size>4222359215</size>
</item><item>
<title>The Great Courses Plus | Great Questions of Philosophy and Physics</title>
<category>Course</category>
<infohash>37d2a5e4638074ceec95460709f871c878184aec</infohash>
<guid>https://academictorrents.com/details/37d2a5e4638074ceec95460709f871c878184aec</guid>
<link>https://academictorrents.com/details/37d2a5e4638074ceec95460709f871c878184aec</link>
<description>Visit &gt;&gt;&gt; http://onehack.us/ https://i.ibb.co/pkp0FdJ/0924.png The Great Courses Plus - Great Questions of Philosophy and Physics

Course details
No field of the humanities is so closely tied to physics as philosophy. Since ancient times, philosophers have puzzled over the nature of space, time, and matter—inquiries that led to the flowering of physics in the 17th century with Isaac Newton and other pioneers of the Scientific Revolution. Since then, the spectacular success of modern physics might imply that philosophy is no longer relevant to the field. Far from it! Surprising discoveries in the atomic and cosmic realms have opened a floodgate of new philosophical questions, such as:
- Is time travel possible? Time travel is an idea that would have seemed absurd to a classical physicist like Newton. But it appears to be a real option according to Albert Einstein’s general theory of relativity—raising the prospect of time machines, along with a host of paradoxes including whether the past can be altered.
- Is the universe fine-tuned for life? The more we learn about the universe, the more it looks tailor-made expressly for us. Does this imply a Creator? On the other hand, where else could we live except in a universe conducive to life? This suggests that countless other universes may exist with quite different properties.
- Is Schrödinger’s cat dead or alive? A thought experiment proposed by the physicist Erwin Schrödinger features a cat whose life hangs in the balance, subject to a quantum event, which is inherently probabilistic and unobservable. The implications have led to startling proposals about the nature of reality.
Treating these and other puzzles with a light and accessible touch, award-winning teacher and philosopher Steven Gimbel of Gettysburg College guides you through the concepts, theories, and speculations that underlie our understanding of reality in The Great Questions of Philosophy and Physics. 
In 12 wide-ranging, half-hour lectures, Professor Gimbel covers many of the fundamental ideas of modern physics, highlighting the role of philosophy in setting ground rules, interpreting results, and posing new questions. The only prerequisite for the course is a desire to think critically and abstractly—in other words, philosophically. No prior background in science, mathematics, or philosophy is assumed. Trained as a philosopher of physics, Professor Gimbel deftly introduces the major players, sketches the intellectual terrain, and outlines the most important debates. He also tells a few jokes, displaying the playful side of his profession. Wrestle with Profound Questions Dr. Gimbel’s humor is on display when he brings up the topic of atoms. “Do atoms exist?” he often asks his classes. “Of course they do,” his students invariably tell him. “What about Santa Claus?” Dr. Gimbel counters. “Does he exist?” The point is that our evidence for atoms is indirect, much like the clues for Santa’s visit (packages under the tree and missing cookies). While the analogy should not be stretched too far, there is a long tradition in the philosophy of science that regards unobservables as being metaphysically out of bounds. This view is called empiricism. There is an equally venerable tradition, called realism, that views strong evidence for entities such as atoms as proof of their existence, in spite of the fact that they can’t be observed directly. In The Great Questions of Philosophy and Physics, you wrestle with tricky debates like this, assessing the arguments on both sides. Inevitably, you will find yourself persuaded by one position and then having second thoughts when you hear the arguments against it, which is a mark of the subtlety of the underlying philosophical issues. Consider these questions, which you address in the course: - Why is math so effective? Mathematics is the hallmark of a rigorous science such as physics. But why should that be? 
Is the world a mathematical system, as some philosophers contend? Or is mathematics simply our most powerful, logical tool for making sense of the relations between things in nature? - Is space a thing or a relation? Newton believed that space is a kind of amphitheater that the universe occupies. His rival, Leibniz, argued that space is just a set of relations. If the contents of the universe were removed, there would be nothing left, not even “space.” Einstein cast this debate in a remarkable new light. - What is scientific truth? Philosopher Nancy Cartwright points out that the laws of physics are idealized and do not describe reality. If fundamental laws don’t lead us to the truth, then what does? It may be that we have to settle for statements that are true enough to give the best explanation—and no more. General Details: Duration: 6h 13m 46s Updated: 04/2025 Language: English Source: https://www.thegreatcourses.com/courses/the-great-questions-of-philosophy-and-physics MP4 | Video: AVC, 854x480p | Audio: AAC, 44.100 KHz, 2 Ch</description>
<size>3521470699</size>
</item><item>
<title>ZeroToMastery | Learning to Make Better Decisions [Decision-Making]</title>
<category>Course</category>
<infohash>6b964efbe6c40b1e3e86cbc87a53bd0b323cffe4</infohash>
<guid>https://academictorrents.com/details/6b964efbe6c40b1e3e86cbc87a53bd0b323cffe4</guid>
<link>https://academictorrents.com/details/6b964efbe6c40b1e3e86cbc87a53bd0b323cffe4</link>
<description>ZeroToMastery - Learning to Make Better Decisions [Decision-Making] Course details Discover the psychology behind decision-making! Learn to overcome biases, tackle uncertainty, and master thinking strategies for better choices in work and life. Gain real-world insights to make empowered decisions. What you'll learn - Understand the Monty Hall Problem to refine problem-solving skills - Identify and overcome cognitive biases like Narrow Framing and Loss Aversion - Balance intuitive (System 1) and analytical (System 2) thinking - Tackle uncertainty with confidence to seize new opportunities - Analyze the Halo Effect and its influence on impressions - Gain inspiration from real-world cases like Netflix's HR reinvention - Apply decision-making strategies in the Tentacle Tents project - Enhance personal and professional decision-making abilities General Details: Duration: 42m 11s Updated: 04/2025 Language: English Subtitle: .SRT Included Source: https://zerotomastery.io/courses/learn-decision-making/ MP4 | Video: AVC, 1920x1080p | Audio: AAC, 48.000 KHz, 2 Ch</description>
<size>199479489</size>
</item><item>
<title>noaa-ncei-estuarine-bathymetric-digital-elevation-models</title>
<category>Dataset</category>
<infohash>9840fa77d032a3821180f7f1726df02431e37e53</infohash>
<guid>https://academictorrents.com/details/9840fa77d032a3821180f7f1726df02431e37e53</guid>
<link>https://academictorrents.com/details/9840fa77d032a3821180f7f1726df02431e37e53</link>
<description>The National Ocean Service (NOS) Estuarine Bathymetric Digital Elevation Models are gridded bathymetry datasets interpolated from 150 years' worth of hydrographic survey data collected by the former NOS Special Projects Office. The initiative produced datasets for 70 estuaries in the conterminous United States with sufficient data coverage to support detailed bathymetric processing. Contact dem.info@noaa.gov Use Cases Bathymetric data adds another dimension to geographic mapping and modeling, and can be used as either a background layer or a 3D surface for draping thematic maps such as benthic or marine organism habitats and geologic data. It also delineates the lower boundary of the water column in hydrodynamic models used to monitor and predict the movement of oil and hazardous materials, temperature and salinity distributions, and animal migration patterns, and to model storm surge and tsunami effects. Data Characteristics The perimeter of each bathymetry aligns with estuarine drainage area water boundaries determined by the NOS Coastal Assessment Framework. Elevations do not extend beyond the high water line. Bathymetric depths are vertically referenced to the local tidal datum, typically Mean Lower Low Water (MLLW) averaged over a 19-year tidal epoch. There are no plans to produce bathymetries for other estuaries, but some existing bathymetries are updated as part of Coastal Digital Elevation Models, which can be accessed through the Bathymetric Data Viewer application.</description>
<size>5264391292</size>
</item><item>
<title>noaa-ngdc-antarctic-paleobathymetry</title>
<category>Dataset</category>
<infohash>2add0421e5c67ff0caccedb8a4b45842acf5cc51</infohash>
<guid>https://academictorrents.com/details/2add0421e5c67ff0caccedb8a4b45842acf5cc51</guid>
<link>https://academictorrents.com/details/2add0421e5c67ff0caccedb8a4b45842acf5cc51</link>
<description>This software package models the paleobathymetry of the circum-Antarctic oceans back to the Late Cretaceous. It is based on a revised tectonic model of the circum-Antarctic region, and incorporates features such as spatially variable subsidence rates, refined rotation poles, and a detailed treatment of selected areas. The software output consists of color-coded maps at user-specified Cenozoic ages and the associated gridded paleobathymetry for all oceans lying south of 30°S. Citation Hayes, D.E., C. Zhang and R.A. Weissel (2009), Modeling Paleobathymetry in the Southern Ocean. Eos, Vol. 90, No. 19, 12 May 2009, p. 165-166. doi: 10.1029/2009EO190001 Methodology This model approximates the composite effects of sedimentation through loading and burial while assuming a constant sedimentation rate at each grid point. It differs from that of Brown et al. (2006) because it does not assume uniform cooling-induced subsidence throughout the entire region being modelled. Technique There are many steps involved in reconstructing paleobathymetry for any specified time. First, plates are rotated back to their position at the target time and oceanic crust younger than the target age is removed. Next, the residual present-day bathymetry must be adjusted to account for the effects of crustal subsidence, sedimentation, and eustatic sea level changes. In some places, crust that existed at the target age has since been subducted and must be resurrected. The user may accept our default parameters or enter new values at runtime for parameters such as subsidence. The user may also alter basic input data such as bathymetry, crustal age, sediment thickness, and stage poles by creating new files. See Appendix A for the required file formats.</description>
<size>554851746</size>
</item><item>
<title>noaa-ncei-outgoing-longwave-radiation-daily</title>
<category>Dataset</category>
<infohash>bb21a17b9a733c2e618724a45721d69136e6d8b6</infohash>
<guid>https://academictorrents.com/details/bb21a17b9a733c2e618724a45721d69136e6d8b6</guid>
<link>https://academictorrents.com/details/bb21a17b9a733c2e618724a45721d69136e6d8b6</link>
<description>The daily Outgoing Longwave Radiation (OLR) Climate Data Record (CDR) measures the amount of terrestrial radiation released into space and, by extension, the amount of cloud cover and water vapor that intercepts that radiation in the atmosphere. Input data for the Daily OLR record primarily comes from the high-resolution infrared radiation sounder (HIRS) radiance observations since 1979, and new-generation infrared hyperspectral sounder radiance data, including from the Infrared Atmospheric Sounding Interferometer (IASI) since 2007 and the Cross-track Infrared Sounder (CrIS) since 2012. The OLR is also retrieved from a combination of operational geostationary Imagers, which helps to achieve better accuracy for the OLR daily integral. The final record is generated through a combination of statistical techniques, including OLR regression, instrument ambient temperature prediction coefficients, and inter-satellite bias corrections. The daily OLR record spans from 1979 to the present.</description>
<size>4466381467</size>
</item><item>
<title>nsf-awards-bulk-download</title>
<category>Dataset</category>
<infohash>a32441812ee3867866ed39a1e1229c38d1454963</infohash>
<guid>https://academictorrents.com/details/a32441812ee3867866ed39a1e1229c38d1454963</guid>
<link>https://academictorrents.com/details/a32441812ee3867866ed39a1e1229c38d1454963</link>
<description>Bulk yearly download of NSF Award JSON records.</description>
<size>3697348206</size>
</item><item>
<title>noaa-ncei-geological-history-ocean-crust</title>
<category>Dataset</category>
<infohash>d07b78b8e23687a216784970ca627463c0af6f8b</infohash>
<guid>https://academictorrents.com/details/d07b78b8e23687a216784970ca627463c0af6f8b</guid>
<link>https://academictorrents.com/details/d07b78b8e23687a216784970ca627463c0af6f8b</link>
<description>This dataset contains four companion digital models of the age, age uncertainty, spreading rates, and spreading asymmetries of the world's ocean basins as geographic and Mercator grids with two-minute resolution. The grids include data from all the major ocean basins as well as detailed reconstructions of back-arc basins. The age, spreading rate, and asymmetry at each grid node is determined by linear interpolation between adjacent seafloor isochrons in the direction of spreading. Ages for ocean floor between the oldest identified magnetic anomalies and continental crust are interpolated by geological estimates of the ages of passive continental margin segments. The age uncertainties for grid cells coinciding with marine magnetic anomaly identifications, observed or rotated to their conjugate ridge flanks, are based on the difference between gridded age and observed age. The uncertainties are also a function of the distance of a given grid cell to the nearest age observation, and the proximity to fracture zones or other age discontinuities. Asymmetries in crustal accretion appear to be frequently related to asthenospheric flow from mantle plumes to spreading ridges, resulting in ridge jumps towards hotspots. The authors use the new age grid to compute global residual basement depth grids from the difference between observed oceanic basement depth and predicted depth using two alternative age-depth relationships. The new set of grids helps investigate prominent negative depth anomalies, which may alternatively be related to subducted slab material descending in the mantle or to asthenospheric flow. A combination of these digital grids and the associated relative and absolute plate motion model with seismic tomography and mantle convection model outputs represents a valuable set of tools to investigate geodynamic problems.</description>
<size>2581710005</size>
</item><item>
<title>noaa-ngdc-global-self-consistent-hierarchical-high-resolution-geography-database</title>
<category>Dataset</category>
<infohash>c2761718707de6963191e7e20855e8a90bdd9a8d</infohash>
<guid>https://academictorrents.com/details/c2761718707de6963191e7e20855e8a90bdd9a8d</guid>
<link>https://academictorrents.com/details/c2761718707de6963191e7e20855e8a90bdd9a8d</link>
<description>Global Self-consistent, Hierarchical, High-resolution Geography Database (GSHHG) is a high-resolution geography data set, amalgamated from two databases: World Vector Shorelines (WVS) and CIA World Data Bank II (WDBII). The former is the basis for shorelines while the latter is the basis for lakes, although there are instances where differences in coastline representations necessitated adding WDBII islands to GSHHG. The WDBII source also provides political borders and rivers. GSHHG data have undergone extensive processing and should be free of internal inconsistencies such as erratic points and crossing segments. The shorelines are constructed entirely from hierarchically arranged closed polygons. GSHHG combines the older GSHHS shoreline database with WDBII rivers and borders, available in either ESRI shapefile format or in a native binary format. Geography data are in five resolutions: crude (c), low (l), intermediate (i), high (h), and full (f). Shorelines are organized into four levels: boundary between land and ocean (L1), boundary between lake and land (L2), boundary between island-in-lake and lake (L3), and boundary between pond-in-island and island (L4). Datasets are in WGS84 geographic coordinates (simple latitudes and longitudes; decimal degrees). GSHHG is released under the GNU Lesser General Public License, and is developed and maintained by Dr. Paul Wessel, SOEST, University of Hawai'i, and Dr. Walter H. F. Smith, NOAA Laboratory for Satellite Altimetry. Please notify Dr. Paul Wessel and Dr. Walter H. F. Smith if any changes are made to the GSHHG data set for commercial use.</description>
<size>326536212</size>
</item><item>
<title>noaa-ncei-sediment-thickness-v3</title>
<category>Dataset</category>
<infohash>e4bb6d9ef21381f7a730769dc060f0104af0436e</infohash>
<guid>https://academictorrents.com/details/e4bb6d9ef21381f7a730769dc060f0104af0436e</guid>
<link>https://academictorrents.com/details/e4bb6d9ef21381f7a730769dc060f0104af0436e</link>
<description>The Total Sediment Thickness database for the World's Oceans and Marginal Seas is a compilation of sediment-thickness data from previously published isopach maps, ocean drilling results from the Ocean Drilling Program (ODP) and the Deep Sea Drilling Project (DSDP), and a variety of seismic data. About Version 3 The global 5-arc-minute total sediment thickness grid, GlobSed, incorporates data and several regional oceanic sediment thickness maps for: NE Atlantic (Funck et al., 2017; Hopper et al., 2014) Mediterranean (Molinari &amp; Morelli, 2011) Arctic (Petrov et al., 2016) Weddell Sea (Huang et al., 2014) Ross Sea, Amundsen Sea, and Bellingshausen Sea sectors off West Antarctica (Lindeque et al., 2016; Wobbe et al., 2014). This version also includes updates in the White Sea region based on the Russian Geological Research Institute (VSEGEI) map of Orlov and Fedorov (2001). GlobSed covers a larger area than NCEI's previous global grids (Divins, 2003; Whittaker et al., 2013), and the new updates result in a 29.7% increase in estimated total oceanic sediment volume.</description>
<size>64638406</size>
</item><item>
<title>noaa-ncei-ngdc-natural-hazards</title>
<category>Dataset</category>
<infohash>716d87712416681a50d6269e3f3d2ebd56b4ff2d</infohash>
<guid>https://academictorrents.com/details/716d87712416681a50d6269e3f3d2ebd56b4ff2d</guid>
<link>https://academictorrents.com/details/716d87712416681a50d6269e3f3d2ebd56b4ff2d</link>
<description>NCEI archives and assimilates tsunami, earthquake, and volcano data to support research, planning, response, and mitigation. Long-term data, including photographs, can be used to establish the history of natural hazard occurrences and help mitigate future events. Includes these datasets: - NCEI/WDS Global Historical Tsunami Database, 2100 BC to Present - NCEI/WDS Global Tsunami Deposits Database - Tsunami Event on Marigrams The Global Historical Tsunami Database provides information on over 2,400 tsunamis from 2100 BC to the present in the Atlantic, Indian, and Pacific Oceans, and the Mediterranean and Caribbean Seas. The database includes two related files. The first file includes information on the tsunami source such as the date, time, and location of the source event; cause and validity of the source; tsunami magnitude and intensity; maximum water height; the total number of fatalities, injuries, houses destroyed, and houses damaged; and total damage estimate (in U.S. dollars). The second related file contains information on the runups (the locations where tsunami waves were observed by eyewitnesses, reconnaissance surveys, tide gauges, and deep-ocean sensors) such as name, location, arrival time, maximum water height and inundation distance, and socio-economic data (deaths, injuries, damage) for the specific runup location. - NCEI/WDS Global Significant Earthquake Database, 2150 BC to Present The Significant Earthquake Database is a global listing of over 5,700 earthquakes from 2150 BC to the present. A significant earthquake is classified as one that meets at least one of the following criteria: caused deaths, caused moderate damage (approximately $1 million or more), magnitude 7.5 or greater, Modified Mercalli Intensity (MMI) X or greater, or the earthquake generated a tsunami.
The database provides information on the date and time of occurrence, latitude and longitude, focal depth, magnitude, maximum MMI intensity, and socio-economic data such as the total number of casualties, injuries, houses destroyed, and houses damaged, and dollar damage estimates. References, political geography, and additional comments are also provided for each earthquake. If the earthquake was associated with a tsunami or volcanic eruption, it is flagged and linked to the related tsunami event or significant volcanic eruption. - Global Volcano Locations Database NCEI maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication. The database includes information on the volcano name, location, elevation, volcano type, date of the last known eruption, and the certainty of Holocene volcanism. - NCEI/WDS Global Significant Volcanic Eruptions Database, 4360 BC to Present The Significant Volcanic Eruptions Database is a global listing of over 600 eruptions from 4360 BC to the present. A significant eruption is classified as one that meets at least one of the following criteria: caused fatalities, caused moderate damage (approximately $1 million or more), Volcanic Explosivity Index (VEI) of 6 or greater, generated a tsunami, or was associated with a significant earthquake. The database provides information on the latitude, longitude, elevation, type of volcano, last known eruption, VEI index, and socio-economic data such as the total number of casualties, injuries, houses destroyed, and houses damaged, and dollar damage estimates. References, political geography, and additional comments are also provided for each eruption. If the eruption was associated with a tsunami or significant earthquake, it is flagged and linked to the related database.
For a complete list of current and past activity for all volcanoes on the planet active during the last 10,000 years, please see the Smithsonian Institution's Global Volcanism Program (GVP). - Wildfires - DMSP Special Projects (nighttime lights, fire detection, and power outages) - Geothermal Energy - Storm Events (National Weather Service data) The Storm Events Database contains the records used to create the official NOAA Storm Data publication, documenting: The occurrence of storms and other significant weather phenomena having sufficient intensity to cause loss of life, injuries, significant property damage, and/or disruption to commerce; Rare, unusual weather phenomena that generate media attention, such as snow flurries in South Florida or the San Diego coastal area; and Other significant meteorological events, such as record maximum or minimum temperatures or precipitation that occur in connection with another event. - Severe Weather Data Inventory The Severe Weather Data Inventory (SWDI) is an integrated database of severe weather records for the United States. The records in SWDI come from a variety of sources in the NCEI archive. SWDI provides the ability to search through all of these data to find records covering a particular time period and geographic region, and to download the results of your search in a variety of formats. The formats currently supported are Shapefile (for GIS), KMZ (for Google Earth), CSV (comma-separated), JSON, and XML.</description>
<size>316889579233</size>
</item><item>
<title>NIH RePORTER 25-04-25</title>
<category>Dataset</category>
<infohash>5cc64ab8d71e5f6c7732818842ea9a744a47bdfc</infohash>
<guid>https://academictorrents.com/details/5cc64ab8d71e5f6c7732818842ea9a744a47bdfc</guid>
<link>https://academictorrents.com/details/5cc64ab8d71e5f6c7732818842ea9a744a47bdfc</link>
<description>RePORTER is the NIH's database of its grants, awards, and publications. &gt; In addition to carrying out its scientific mission, NIH exemplifies and promotes the highest level of public accountability. To that end, the Research Portfolio Online Reporting Tools (RePORT) website provides access to reports, data, and analyses of NIH research activities, including information on NIH expenditures and the results of NIH-supported research. &gt; &gt; One of the tools available on the RePORT website is the RePORTER (RePORT Expenditures and Results) module. RePORTER is an electronic tool that allows users to search a repository of both intramural and extramural NIH-funded research projects and access publications and patents resulting from NIH funding. &gt; &gt; In addition to RePORTER, the RePORT website also contains other tools that provide access to reports and summary statistics on NIH funding and the organizations and people involved in NIH research and training. One of these tools is the NIH Data Book, which summarizes the most commonly asked questions about the NIH budget and extramural programs. Another tool is called Awards by Location, which summarizes NIH awards for a particular fiscal year by the location and organization of the awardees. This upload also contains the [NIH databook](https://report.nih.gov/nihdatabook), exported to xlsx and csv. ## Threat This data might seem very low risk, since it's just uncontroversial data about what the NIH has funded in the past, but the Trump administration has moved to clear records from the NSF's records system, so one can expect RePORTER data to follow soon after. Initially set on "watchlist" because no specific threat has been identified. ## Method ### ExPORTER Data was acquired from NIH's ExPORTER site: https://reporter.nih.gov/exporter/ The page does not contain regular links with download URLs; instead the table is dynamically loaded and the links are opaque single-use query-parameter blobs.
A simple playwright script captures the data, and pandas is used to extract the data dictionaries:

```python
from pathlib import Path

import pandas as pd
from playwright.sync_api import sync_playwright
from tqdm import trange
```
</description>
<size>3999268864</size>
</item><item>
<title>globalchange.gov PDFs</title>
<category>Dataset</category>
<infohash>89e2a3f760dc3b4ccbdf9b8436ae8efb228b690a</infohash>
<guid>https://academictorrents.com/details/89e2a3f760dc3b4ccbdf9b8436ae8efb228b690a</guid>
<link>https://academictorrents.com/details/89e2a3f760dc3b4ccbdf9b8436ae8efb228b690a</link>
<description>A collection of all PDFs and reports from https://globalchange.gov/reports. Includes many climate reports from globalchange.gov as well as from other agencies and organizations such as the IPCC. As of April 9, 2025, the USGCRP has had its funding cancelled.</description>
<size>3141533696</size>
</item><item>
<title>doi-boem-oil-gas-mapping-gis-data</title>
<category>Dataset</category>
<infohash>2ced33ea7e4980a224982d72ec4221690bd08ed3</infohash>
<guid>https://academictorrents.com/details/2ced33ea7e4980a224982d72ec4221690bd08ed3</guid>
<link>https://academictorrents.com/details/2ced33ea7e4980a224982d72ec4221690bd08ed3</link>
<description>Contains full maps and GIS data from the BOEM site. About: OUR MISSION To manage development of U.S. Outer Continental Shelf (OCS) energy, mineral, and geological resources in an environmentally and economically responsible way. Maps and GIS Data MarineCadastre.gov – This online interactive map viewer has integrated submerged lands information consisting of legal, property ownership (cadastre), physical, biological, ocean uses, and cultural information from multiple agencies in a common reference framework. Users can create, view, and print maps from this free, easy-to-use viewer, or can directly link these GIS data layers (web map services) into their own GIS applications. Most data are downloadable directly from the data registry. Additional map mash-ups are available to use or for simple viewing. Other tools such as OceanReports and the Environmental Studies Program Information System (ESPIS) can also be found from the MarineCadastre.gov site. GIS Data/Shapefiles – Download GIS data files for BOEM Offshore block grids, boundaries, active leases, wells, pipelines, and more: Atlantic, Gulf, Pacific, Alaska. GIS Data Maintained by the Office of Renewable Energy Programs. OPD, SOBD, CBD, &amp; Lease Map Diagrams – Official Protraction Diagrams (OPDs) and Lease Maps show the OCS block grids and other boundaries for a given area. Supplemental Official Block Diagrams (SOBDs) are created for individual blocks which are intersected by offshore boundaries. The zip files may contain both current and historic SOBDs. The older SOBDs are provided for historical reference, but all future activities will be based on the most current SOBDs. Composite Block Diagrams (CBDs) show NAD 27 lease information portrayed on the NAD 83 cadastre, or show other boundaries that have changed over time. Not all OPDs have SOBDs or CBDs associated with them.
Click on the individual Region names: Atlantic, Gulf, Gulf NAD83, Pacific, Alaska. BOEM Map Gallery – Direct links to a selection of national, regional, local, and special-purpose maps which are spread throughout BOEM's website. OCS Boundary Policies and Procedures – This page contains court decisions, treaties, legislation, policies, procedures, and other documents that guide the boundary-making process on the Outer Continental Shelf.</description>
<size>1458544588353</size>
</item><item>
<title>noaa-ncei-hurricane-satellite-hursat-avhrr-b1-mw</title>
<category>Dataset</category>
<infohash>0012eeaa81a4df4da723cbb23ff5149934782e49</infohash>
<guid>https://academictorrents.com/details/0012eeaa81a4df4da723cbb23ff5149934782e49</guid>
<link>https://academictorrents.com/details/0012eeaa81a4df4da723cbb23ff5149934782e49</link>
<description>The HURSAT project provides Tropical Cyclone-centric satellite data in gridded netCDF format to create a database of small, portable, and easy-to-work-with storm data. The project began with HURSAT-B1, but has expanded to include data from other satellite sources with different temporal and spatial resolution. Includes three separate HURSAT datasets. AVHRR Data: HURSAT-AVHRR is derived from NOAA's Advanced Very High Resolution Radiometer and offsets the weaknesses of the HURSAT-B1 data. It has higher (4 km) spatial resolution, and its observations originate from one instrument instead of a variety of instruments in the B1 record. However, the temporal resolution is limited to two to eight observations per day, depending on the satellite configuration. B1 Data: HURSAT-B1 data are derived from International Satellite Cloud Climatology Project (ISCCP) B1 data. The HURSAT-B1 v06 data span 1978-2015 and provide coverage of global tropical cyclones at 8-km and 3-hourly resolution. Microwave: Passive microwave observations provide significant information content because most clouds are transparent at microwave wavelengths. The HURSAT-Microwave (MW) dataset is constructed using a process very similar to HURSAT-B1. Each time a Defense Meteorological Satellite Program (DMSP) satellite passes over a tropical cyclone, Special Sensor Microwave Imager (SSMI) data are mapped to an equal-angle grid (fixed latitude/longitude) centered on the temporally interpolated storm location. HURSAT-MW provides brightness temperatures for all seven SSMI channels. No product retrievals (e.g., rain rate, total column water vapor, etc.) are provided in the data but are possible (e.g., viewing the imagery derived from the data).</description>
<size>358096997529</size>
</item><item>
<title>noaa-ncei-precipitation-persiann</title>
<category>Dataset</category>
<infohash>448dd345e52d3a0b0953b7f2d86f75089ca61a24</infohash>
<guid>https://academictorrents.com/details/448dd345e52d3a0b0953b7f2d86f75089ca61a24</guid>
<link>https://academictorrents.com/details/448dd345e52d3a0b0953b7f2d86f75089ca61a24</link>
<description>The Precipitation - PERSIANN Climate Data Record (CDR) is a daily, quasi-global precipitation product that spans from 1982 to the present. Coverage extends from 60°S to 60°N and 0° to 360° longitude at 0.25° spatial resolution. This dataset supports climatologists, hydrologists, hydrometeorologists, and hydroclimatologists in various forms of climate research, including extreme event (flood and drought) analysis.</description>
<size>17169821768</size>
</item><item>
<title>MindValley | Calm Mind</title>
<category>Course</category>
<infohash>76a5e15fb4fadffff87a42d534059afb710410cc</infohash>
<guid>https://academictorrents.com/details/76a5e15fb4fadffff87a42d534059afb710410cc</guid>
<link>https://academictorrents.com/details/76a5e15fb4fadffff87a42d534059afb710410cc</link>
<description>MindValley - Calm Mind Course details Calm Mind - Rewire Your Brain in 7 Days In just 7 days, break free from anxiety, stress, and emotional overwhelm with Dr. Caroline Leaf's proven 5-step Neurocycle™ method. This guided program helps you transform your mental well-being in just 15-20 minutes a day. What You'll Learn: Calm Mind is a powerful brain detox course that helps you reclaim your personal power and emotional peace through neuroscience-based practices. - The 5-Step Neurocycle™ for rewiring negative thoughts - Daily 15-20 min sessions to develop inner calm - Emotional regulation techniques for daily life - Rebuild confidence and resilience from within - Heal past emotional pain and move forward - Break toxic thought loops and self-sabotage - Step into emotional freedom and mental clarity Result By Day 7, you'll experience a more centered, empowered, and emotionally balanced version of yourself. Instructor Dr. Caroline Leaf - cognitive neuroscientist, bestselling author, and global thought leader in mental wellness. General Details: Duration: 2h 22m Updated: 04/2025 Language: English Source: https://www.mindvalley.com/calmmind MPEG-2 | Video: AVC, 1920x1080p | Audio: AAC, 48.000 KHz, 2 Ch | 9 Lectures | 4 PDF</description>
<size>3364796652</size>
</item><item>
<title>ITPro.TV | Nmap</title>
<category>Course</category>
<infohash>5764833f9c21448b7d5cffcace16df4555e6c456</infohash>
<guid>https://academictorrents.com/details/5764833f9c21448b7d5cffcace16df4555e6c456</guid>
<link>https://academictorrents.com/details/5764833f9c21448b7d5cffcace16df4555e6c456</link>
<description>Visit &gt;&gt;&gt; http://onehack.us/ https://i.ibb.co/Qjht4GF7/2759d.png ITPro.TV - Nmap Course details This course begins with a straightforward guide to installing Nmap, ensuring that you have the software ready to use. Next, we cover the fundamentals of Nmap, providing a solid foundation for beginners. As you progress, you will delve deeper into host discovery, learning various techniques to identify active hosts on your network. Each step is carefully explained to build your confidence and understanding. The journey continues with an in-depth exploration of port states and port scanning. You'll gain insights into different port states and their implications for network security. Detailed videos on port scanning techniques will teach you how to identify open ports, understand service versions, and recognize vulnerabilities. Advanced port scanning methods, such as IP ID Idle Scan and FTP Bounce, are thoroughly covered, offering you sophisticated tools to enhance your scanning capabilities. Throughout the course, practical examples and hands-on exercises ensure that you can apply what you've learned immediately. By the end, you will possess a comprehensive skill set that enables you to use Nmap effectively for network security and troubleshooting. This course is designed to transform you into a proficient Nmap user, capable of tackling complex network challenges with confidence. What you will learn • Install and configure Nmap on various operating systems • Understand and utilize basic Nmap commands • Perform effective host discovery to identify active devices • Conduct thorough port scans to identify open and vulnerable ports • Apply advanced scanning techniques for deeper network analysis General Details: Duration: 7h 6m 38s Updated: 04/2025 Language: English Subtitle: .SRT Included Source: https://www.acilearning.com/catalog/it/nmap/ MP4 | Video: AVC, 1280x720p | Audio: AAC, 44.100 KHz, 2 Ch | 12 Lectures</description>
<size>2308632466</size>
</item><item>
<title>noaa-ncei-ndbc-hfradar-real-time-vectors</title>
<category>Dataset</category>
<infohash>3b4c08297df0ddab3548f77ead1529e5aeb82734</infohash>
<guid>https://academictorrents.com/details/3b4c08297df0ddab3548f77ead1529e5aeb82734</guid>
<link>https://academictorrents.com/details/3b4c08297df0ddab3548f77ead1529e5aeb82734</link>
<description>This dataset contains near-real-time ocean surface velocities, also known as total vector velocities, derived from high-frequency (HF) radar stations. The velocities are arranged in a horizontal latitude/longitude grid. Measured velocities are indicative of the upper 0.3 - 2.5 meters of the ocean depending on the operating frequency of the HF-radar and the vertical velocity profile of the water column. Data files are in NetCDF format following the CF (Climate and Forecast) Metadata Conventions. Velocities are reported in CF variables surface_eastward_sea_water_velocity and surface_northward_sea_water_velocity. The National Data Buoy Center (NDBC) – in collaboration with the Scripps Institution of Oceanography through June 2025 and thereafter with NOAA's National Environmental Satellite, Data, and Information Service (NESDIS) – assembles the data from the Integrated Ocean Observing System (IOOS) Surface Currents Program's HF-Radar National Network and submits the data on a monthly basis to NOAA's National Centers for Environmental Information (NCEI). Remote sensing of ocean surface velocity from shore-based HF-radar sensors bridges the observational gap between point samples obtained from in-situ sampling and synoptic-scale, relatively low-resolution satellite data by providing continuous mesoscale coverage at relatively high resolution near the coast.</description>
<size>48793819269</size>
</item><item>
<title>doi-bia-open-data</title>
<category>Dataset</category>
<infohash>6641f150cf768fd3c8a29c0ccbc570e6d57eaaef</infohash>
<guid>https://academictorrents.com/details/6641f150cf768fd3c8a29c0ccbc570e6d57eaaef</guid>
<link>https://academictorrents.com/details/6641f150cf768fd3c8a29c0ccbc570e6d57eaaef</link>
<description>Download of all document and dataset entries from https://opendata-1-bia-geospatial.hub.arcgis.com/search About: Bureau of Indian Affairs Open Data Portal Collaborating together to address our common goals - Our Mission The mission of the Branch of Geospatial Support (BOGS) is to assist Tribal governments and Indian Affairs in managing the cultural and natural resources of Indian Country by providing geographic information systems software, training, and technical support.</description>
<size>121772184</size>
</item><item>
<title>doi-bureau-of-indian-education-mirror</title>
<category>Dataset</category>
<infohash>0c3b7c91080b989e2d42c7514cd6c87d4f9cb9fb</infohash>
<guid>https://academictorrents.com/details/0c3b7c91080b989e2d42c7514cd6c87d4f9cb9fb</guid>
<link>https://academictorrents.com/details/0c3b7c91080b989e2d42c7514cd6c87d4f9cb9fb</link>
<description>WACZ file captured by Archiveweb.page for the Bureau of Indian Education site. About: History Three major legislative actions have restructured the Bureau of Indian Affairs' role in educating American Indians since the Snyder Act of 1921. First, the Indian Reorganization Act of 1934 introduced the teaching of Indian history and culture in BIA schools (until then it had been Federal policy to acculturate and assimilate Indian people by eradicating their tribal cultures through a boarding school system). Second, the Indian Self-Determination and Education Assistance Act of 1975 gave authority to federally recognized tribes to contract with the BIA for the operation of Bureau-funded schools and to determine education programs suitable for their children. The Education Amendments Act of 1978 and further technical amendments provided funds directly to tribally operated schools, empowered Indian school boards, permitted local hiring of teachers and staff, and established a direct line of authority between the Education Director and the AS-IA. The No Child Left Behind Act of 2001 brought additional requirements to the schools by holding them accountable for improving their students' academic performance with the U.S. Department of Education supplemental program funds they receive through the Bureau. Formerly known as the Office of Indian Education Programs, the Bureau of Indian Education was renamed and established on August 29, 2006, to reflect the parallel purpose and organizational structure BIE has in relation to other programs within the Office of the Assistant Secretary-Indian Affairs. The BIE is headed by a Director, who is responsible for the line direction and management of all education functions, including forming policies and procedures, supervising all program activities, and approving the expenditure of funds appropriated for education functions. 
As stated in Title 25 CFR Part 32.3, BIE's mission is to provide quality education opportunities from early childhood through life in accordance with a tribe's needs for cultural and economic well-being, in keeping with the vast diversity of Indian tribes and Alaska Native villages as distinct cultural and governmental entities. Further, the BIE is to manifest consideration of the whole person by considering the individual's spiritual, mental, physical, and cultural aspects within his or her family and tribal or village context. The BIE school system employs thousands of teachers, administrators and support personnel, while many more work in tribal school systems. Currently, there are 183 Bureau-funded elementary and secondary schools on 64 reservations in 23 states, serving approximately 40,000 Indian students. Of these, 55 are BIE-operated and 128 are tribally controlled under BIE contracts or grants. The Bureau also funds or operates off-reservation boarding schools and peripheral dormitories near reservations for public school students. The BIE also serves American Indian and Alaska Native post-secondary students through higher education scholarships and support funding for tribal colleges and universities. The BIE directly operates two post-secondary institutions: the Haskell Indian Nations University in Lawrence, Kansas, and the Southwestern Indian Polytechnic Institute in Albuquerque, New Mexico. Vision The Bureau of Indian Education is the preeminent provider of culturally relevant educational services and supports provided by highly effective educators to students at BIE-funded schools to foster lifelong learning. Core Values BIE employees carry out the mission to achieve the vision through guiding organizational principles underpinning how the work will be successfully accomplished. Excellence: The BIE achieves success through continuous self-assessment and improvement. Focus: The BIE is student-centered, with a commitment to addressing the holistic needs of students. Integrity: The BIE maintains high standards of character and professionalism as the foundation upon which the agency is built. Respect: The BIE fosters communities of support through mutual regard and collaboration. Service: The BIE supports students through proactive and responsive teamwork with schools, Tribes, and communities.</description>
<size>419226591</size>
</item><item>
<title>Reddit comments/submissions 2025-03</title>
<category>Dataset</category>
<infohash>830da9df02e91fd50881b26aa33902d2989d4236</infohash>
<guid>https://academictorrents.com/details/830da9df02e91fd50881b26aa33902d2989d4236</guid>
<link>https://academictorrents.com/details/830da9df02e91fd50881b26aa33902d2989d4236</link>
<description>Reddit comments and submissions from 2025-03, collected by u/RaiderBDev. These are zstandard-compressed ndjson files. Example Python scripts for parsing the data can be found here: https://github.com/Watchful1/PushshiftDumps Previous months can be found here: https://academictorrents.com/details/ba051999301b109eab37d16f027b3f49ade2de13</description>
<size>56795445442</size>
</item><item>
<title>noaa-ncei-world-ocean-atlas-23-figures</title>
<category>Dataset</category>
<infohash>9bcdcb5efbcec15e37d918784618329e487599ac</infohash>
<guid>https://academictorrents.com/details/9bcdcb5efbcec15e37d918784618329e487599ac</guid>
<link>https://academictorrents.com/details/9bcdcb5efbcec15e37d918784618329e487599ac</link>
<description>The WOA23 Figures application contains geographic distributions of objectively analyzed fields and statistics at 102 standard depth levels of the World Ocean. Temperature and salinity fields are available on one-degree and quarter-degree latitude-longitude grids and were generated for the following time periods: 1955-1964, 1965-1974, 1975-1984, 1985-1994, 1995-2004, 2005-2014, 2015-2022, and three 30-year "climate normal" periods: 1971-2000, 1981-2010, and 1991-2020. Inorganic nutrients (phosphate, silicate, and nitrate) are presented on one-degree fields for the 1965-2022 time period. Oxygen, Apparent Oxygen Utilization, and Percent Oxygen Saturation are presented on one-degree fields for the 1965-2022 and 1971-2000 time periods. The fields used to generate these climatological maps were computed by objective analysis of all scientifically quality-controlled historical data in the World Ocean Database 2023.</description>
<size>114668783840</size>
</item><item>
<title>MindValley | Smart Money: Your Roadmap to Financial Success</title>
<category>Course</category>
<infohash>11c14837e6d5c11656ce1a39d0a9e8722f1ea5cf</infohash>
<guid>https://academictorrents.com/details/11c14837e6d5c11656ce1a39d0a9e8722f1ea5cf</guid>
<link>https://academictorrents.com/details/11c14837e6d5c11656ce1a39d0a9e8722f1ea5cf</link>
<description>Visit &gt;&gt;&gt; http://onehack.us/ https://i.ibb.co/VYTCyD71/Mindvalley12.png MindValley - Smart Money: Your Roadmap to Financial Success Course details Embark on your journey to financial success with Jaspreet Singh, where no-nonsense wisdom meets actionable strategies. This Quest offers practical guidance to elevate your financial prowess and build a robust foundation for future wealth. Immerse yourself in insights and purposeful exercises to level up your finances and master strategies for navigating and mitigating financial risks. This isn't just education; it's a practical roadmap to financial empowerment and lasting success. For anyone seeking a richer financial future. Do you find yourself constantly worrying about money? Does the thought of retirement, investments, or even just monthly budgeting fill you with a sense of dread? If you ever thought there must be more straightforward ways to manage your finances, ways that the experts and the wealthy seem to know but aren't sharing, you're not alone. Unfortunately, the majority of us have been left in the dark when it comes to truly understanding money, leading to a cycle of debt, stress, and financial uncertainty for countless individuals. But should you really bear the blame for a system that fails to equip you with crucial skills we all deserve? The truth is, financial literacy is a right, not a privilege. And under the guidance of Jaspreet Singh, the power to change your financial destiny is now firmly in your grasp. Transform Your Financial Landscape In Just Two Weeks "Smart Money: Your Roadmap to Financial Success" is a rapid transformational experience that radically redefines how you interact with money. In just 14 days, dedicating mere minutes each day, you'll dive into a world where managing money, investing wisely, and growing your wealth becomes second nature. 
Every strategy, tool, and mindset shift you learn is designed to enhance one of what Jaspreet calls The 3 Keys to Financial Fitness, namely - Spend Less - Earn More - Invest like crazy Whether you're overwhelmed by debt, confused about investing, or simply seeking a more secure financial future, Jaspreet's clear, step-by-step approach helps you make intelligent money choices that set you and your family up for life. What You’ll Learn Master the Money Mindset Unlock the same money mindset that has long been the secret of the wealthy: a mindset of abundance that effortlessly attracts and builds wealth, rather than perpetuating scarcity. Step Beyond Financial Fear Confidently step into new opportunities, applying strategies once thought beyond reach to achieve groundbreaking financial success. Income Optimization Turn every paycheck into a growth opportunity. You've unlocked the secrets of income maximization, stretching your earnings to their fullest potential for a richer life. Strategic Budgeting Master the art of budgeting where every dollar is strategically allocated. Handle unexpected expenses with ease, and watch your wealth grow as you follow a budget that aligns with your financial goals. Investing Made Simple Become a confident investor, turning market fluctuations into opportunities for predictable gains. You're now making smart, informed decisions that steadily build your financial portfolio. Amplify Your Finances Witness the transformation of your financial landscape as you apply ingenious strategies and tap into new streams of passive income, turning modest savings into a significant nest egg. From Debt Management to Debt Freedom Understand the difference between ‘good debt’ and ‘bad debt’ for deep peace of mind and how to never pay a penny of interest on your credit cards. 
General Details: Duration: 3h 33m Updated: 04/2025 Language: English PDF: Included Source: https://www.mindvalley.com/smartmoney MP4 | Video: AVC, 1920x1080p | Audio: AAC, 48.000 KHz, 2 Ch</description>
<size>4792267308</size>
</item><item>
<title>s3-noaa-cdr-sea-surface-temp-whoi-pds-2025-04-11.tar.gz</title>
<category>Dataset</category>
<infohash>2cb52842743b2633585e9a467a5f3dd543ae8baa</infohash>
<guid>https://academictorrents.com/details/2cb52842743b2633585e9a467a5f3dd543ae8baa</guid>
<link>https://academictorrents.com/details/2cb52842743b2633585e9a467a5f3dd543ae8baa</link>
<description>From the author: NOAA's Climate Data Records (CDRs) are robust, sustainable, and scientifically sound climate records that provide trustworthy information on how, where, and to what extent the land, oceans, atmosphere and ice sheets are changing. These datasets are thoroughly vetted time series measurements with the longevity, consistency, and continuity to assess and measure climate variability and change. NOAA CDRs are vetted using standards established by the National Research Council (NRC). Climate Data Records are created by merging data from surface, atmosphere, and space-based systems across decades. NOAA's Climate Data Records provide authoritative and traceable long-term climate records. NOAA developed CDRs by applying modern data analysis methods to historical global satellite data. This process can clarify the underlying climate trends within the data and allows researchers and other users to identify economic and scientific value in these records. NCEI maintains and extends CDRs by applying the same methods to present-day and future satellite measurements. Oceanic Climate Data Records are measurements of oceans and seas both surface and subsurface as well as frozen state variables. Documentation: https://www.ncdc.noaa.gov/cdr Archive Details: Created on 2025-04-11 by using awscli to sync the S3 bucket @ s3://noaa-cdr-sea-surface-temp-whoi-pds/ How to Cite: NOAA Oceanic Climate Data Records was accessed on DATE from https://registry.opendata.aws/noaa-cdr-oceanic. Description: Sea Surface Temperature - WHOI Resource type: S3 Bucket Amazon Resource Name (ARN): arn:aws:s3:::noaa-cdr-sea-surface-temp-whoi-pds AWS Region: us-east-1 AWS CLI Access (No AWS account required): aws s3 ls --no-sign-request s3://noaa-cdr-sea-surface-temp-whoi-pds/ License Details: NOAA data disseminated through NODD are open to the public and can be used as desired. 
NOAA makes data openly available to ensure maximum use of our data, and to spur and encourage exploration and innovation throughout the industry. NOAA requests attribution for the use or dissemination of unaltered NOAA data. However, it is not permissible to state or imply endorsement by or affiliation with NOAA. If you modify NOAA data, you may not state or imply that it is original, unaltered NOAA data.</description>
<size>166086926325</size>
</item><item>
<title>Intro to 3D Coat | Noob to Cook By Anton Tenitsky</title>
<category>Course</category>
<infohash>0fbbb696044bc1d4e5207acc1d33dcccc6f0358f</infohash>
<guid>https://academictorrents.com/details/0fbbb696044bc1d4e5207acc1d33dcccc6f0358f</guid>
<link>https://academictorrents.com/details/0fbbb696044bc1d4e5207acc1d33dcccc6f0358f</link>
<description>Visit &gt;&gt;&gt; http://onehack.us/ https://i.ibb.co/Dg49LbMm/3D-Coat.png Intro to 3D Coat - Noob to Cook By Anton Tenitsky Course details The course is designed to guide you through the fundamental features of this 3D sculpting software. I am covering key topics like sculpting, manual and auto-retopology, UV mapping, baking, texturing, and rendering. The tutorial also bridges the workflow between 3D Coat and Blender, showing how to set up scenes and create camera animations for professional-quality presentations. The course consists of short videos, 1-3 minutes per tool, which makes it convenient to navigate and study. Long processes are recorded as timelapses with narration. The 3D Coat sculpt and Blender scene are included in the Project Files. CHAPTERS: 01 - Interface Basics Learn to navigate the 3D Coat interface, customize the workspace, and use essential tools and shortcuts to work efficiently. 02 - Sculpting Explore voxel and surface sculpting techniques, working with brushes, stencils, and symmetry to create detailed models. 03 - Manual Retopology Master manual retopology by creating clean edge loops and topology for an optimized model. 04 - Creating UVs Discover how to mark seams, unwrap models, and optimize UV layouts for clean and distortion-free texturing. 05 - Baking and Texturing Learn to bake textures and apply PBR materials using smart materials and layers for realistic, detailed surfaces. 06 - Blender Scene Set Up Set up a scene in Blender, import models from 3D Coat, apply materials, and adjust lighting and camera for final rendering. 07 - Auto-Retopology and Texturing Automatically generate clean topology with 3D Coat’s auto-retopology tools, apply textures, and create smooth camera animations in Blender for dynamic presentations. Skills you’ll gain - Master 3D sculpting, retopology, UV mapping, and texturing in 3D Coat! 
General Details: Duration: 8h 8m Updated: 03/2025 Language: English Source: https://tenitsky.gumroad.com/l/intro3dcoat MP4 | Video: AVC, 1920x1080p | Audio: AAC, 44.100 KHz, 2 Ch</description>
<size>5055401579</size>
</item><item>
<title>s3-noaa-cdr-sea-surface-temp-optimum-interpolation-pds-2025-04-11.tar.gz</title>
<category>Dataset</category>
<infohash>6f31eedcbb115ce392996e85fdf38622456c6413</infohash>
<guid>https://academictorrents.com/details/6f31eedcbb115ce392996e85fdf38622456c6413</guid>
<link>https://academictorrents.com/details/6f31eedcbb115ce392996e85fdf38622456c6413</link>
<description>From the author: NOAA's Climate Data Records (CDRs) are robust, sustainable, and scientifically sound climate records that provide trustworthy information on how, where, and to what extent the land, oceans, atmosphere and ice sheets are changing. These datasets are thoroughly vetted time series measurements with the longevity, consistency, and continuity to assess and measure climate variability and change. NOAA CDRs are vetted using standards established by the National Research Council (NRC). Climate Data Records are created by merging data from surface, atmosphere, and space-based systems across decades. NOAA's Climate Data Records provide authoritative and traceable long-term climate records. NOAA developed CDRs by applying modern data analysis methods to historical global satellite data. This process can clarify the underlying climate trends within the data and allows researchers and other users to identify economic and scientific value in these records. NCEI maintains and extends CDRs by applying the same methods to present-day and future satellite measurements. Oceanic Climate Data Records are measurements of oceans and seas both surface and subsurface as well as frozen state variables. Documentation: https://www.ncdc.noaa.gov/cdr Archive Details: Created on 2025-04-11 by using awscli to sync the S3 bucket @ s3://noaa-cdr-sea-surface-temp-optimum-interpolation-pds/ How to Cite: NOAA Oceanic Climate Data Records was accessed on DATE from https://registry.opendata.aws/noaa-cdr-oceanic. Description: Sea Surface Temperature - Optimum Interpolation Resource type: S3 Bucket Amazon Resource Name (ARN): arn:aws:s3:::noaa-cdr-sea-surface-temp-optimum-interpolation-pds AWS Region: us-east-1 AWS CLI Access (No AWS account required): aws s3 ls --no-sign-request s3://noaa-cdr-sea-surface-temp-optimum-interpolation-pds/ License Details: NOAA data disseminated through NODD are open to the public and can be used as desired. 
NOAA makes data openly available to ensure maximum use of our data, and to spur and encourage exploration and innovation throughout the industry. NOAA requests attribution for the use or dissemination of unaltered NOAA data. However, it is not permissible to state or imply endorsement by or affiliation with NOAA. If you modify NOAA data, you may not state or imply that it is original, unaltered NOAA data.</description>
<size>57546385101</size>
</item><item>
<title>s3-noaa-cdr-sea-ice-concentration-pds-2025-04-11.tar.gz</title>
<category>Dataset</category>
<infohash>903231d496bcbab71e3f681bed6521085ef68a8c</infohash>
<guid>https://academictorrents.com/details/903231d496bcbab71e3f681bed6521085ef68a8c</guid>
<link>https://academictorrents.com/details/903231d496bcbab71e3f681bed6521085ef68a8c</link>
<description>From the author: NOAA's Climate Data Records (CDRs) are robust, sustainable, and scientifically sound climate records that provide trustworthy information on how, where, and to what extent the land, oceans, atmosphere and ice sheets are changing. These datasets are thoroughly vetted time series measurements with the longevity, consistency, and continuity to assess and measure climate variability and change. NOAA CDRs are vetted using standards established by the National Research Council (NRC). Climate Data Records are created by merging data from surface, atmosphere, and space-based systems across decades. NOAA's Climate Data Records provide authoritative and traceable long-term climate records. NOAA developed CDRs by applying modern data analysis methods to historical global satellite data. This process can clarify the underlying climate trends within the data and allows researchers and other users to identify economic and scientific value in these records. NCEI maintains and extends CDRs by applying the same methods to present-day and future satellite measurements. Oceanic Climate Data Records are measurements of oceans and seas both surface and subsurface as well as frozen state variables. Documentation: https://www.ncdc.noaa.gov/cdr Archive Details: Created on 2025-04-11 by using awscli to sync the S3 bucket @ s3://noaa-cdr-sea-ice-concentration-pds/ How to Cite: NOAA Oceanic Climate Data Records was accessed on DATE from https://registry.opendata.aws/noaa-cdr-oceanic. Description: Sea Ice Concentration Resource type: S3 Bucket Amazon Resource Name (ARN): arn:aws:s3:::noaa-cdr-sea-ice-concentration-pds AWS Region: us-east-1 AWS CLI Access (No AWS account required): aws s3 ls --no-sign-request s3://noaa-cdr-sea-ice-concentration-pds/ License Details: NOAA data disseminated through NODD are open to the public and can be used as desired. 
NOAA makes data openly available to ensure maximum use of our data, and to spur and encourage exploration and innovation throughout the industry. NOAA requests attribution for the use or dissemination of unaltered NOAA data. However, it is not permissible to state or imply endorsement by or affiliation with NOAA. If you modify NOAA data, you may not state or imply that it is original, unaltered NOAA data.</description>
<size>44349014790</size>
</item><item>
<title>Treesearch</title>
<category>Paper</category>
<infohash>3bcc2b1228b1fdad4d2ec02f080837f0359fcb56</infohash>
<guid>https://academictorrents.com/details/3bcc2b1228b1fdad4d2ec02f080837f0359fcb56</guid>
<link>https://academictorrents.com/details/3bcc2b1228b1fdad4d2ec02f080837f0359fcb56</link>
<description>This is a ~297GB uncompressed backup of the US Forest Service's "Treesearch" archive. It contains journal articles (and some public congressional/executive reports) authored by Forest Service Research &amp; Development scientists going back to 1902.</description>
<size>319773741493</size>
</item><item>
<title>Udemy - Go The Complete Developer's Guide (Golang)</title>
<category>Course</category>
<infohash>773b405dc903c094b7fa2ce450adad066b277490</infohash>
<guid>https://academictorrents.com/details/773b405dc903c094b7fa2ce450adad066b277490</guid>
<link>https://academictorrents.com/details/773b405dc903c094b7fa2ce450adad066b277490</link>
<description>Go is an open source programming language created by Google.  As one of the fastest growing languages in terms of popularity, it's a great time to pick up the basics of Go! This course is designed to get you up and running as fast as possible with Go.  We'll quickly cover the basics, then dive into some of the more advanced features of the language.  Don't be tricked by other courses that only teach you for-loops and if-statements!  This is the only course on Udemy that will teach you how to use the full power of Go's concurrency model and interface type systems. Go is designed to be easy to pick up, but tough to master.  Through multiple projects, quizzes, and assignments, you'll quickly start to master the language's quirks and oddities.  Go is like any other language - you have to write code to learn it!  This course will give you ample opportunities to strike out on your own and start working on your own programs. In this course you will: Understand the basic syntax and control structures of the language Apply Go's concurrency model to build massively parallel systems Grasp the purpose of types, which is especially important if you're coming from a dynamically typed language like JavaScript or Ruby Organize code through the use of packages Use the Go runtime to build and compile projects Get insight into critical design decisions in the language Gain a sense of when to use basic language features Go is one of the fastest-growing programming languages released in the last ten years.  Get job-ready with Go today by enrolling now!</description>
<size>5825439715</size>
</item><item>
<title>Coursera | Psychedelic Science And Medicine 2025</title>
<category>Course</category>
<infohash>1322db11e9b442caac35a9cdfffa271d98819b39</infohash>
<guid>https://academictorrents.com/details/1322db11e9b442caac35a9cdfffa271d98819b39</guid>
<link>https://academictorrents.com/details/1322db11e9b442caac35a9cdfffa271d98819b39</link>
<description>Visit &gt;&gt;&gt; http://onehack.us/ https://i.ibb.co/rKx7grWR/Johns-Hopkins.png Coursera - Psychedelic Science And Medicine 2025 Course details Explore the science and therapeutic potential of psychedelics in this course led by experts from the Johns Hopkins Center for Psychedelic and Consciousness Research. You'll learn about the history of psychedelic use, the neuroscience underlying their effects, and the latest clinical trials evaluating their therapeutic potential. Gain insights into their risks and benefits, ethical considerations like informed consent, and ongoing challenges in the field. This course equips learners to critically assess scientific findings, moving beyond hype to understand the evidence. Whether you're interested in the neurobiological mechanisms of psychedelics, their role in mental health treatment, or broader societal impacts, this course provides a comprehensive and evidence-based foundation. Perfect for students, professionals, or anyone curious about this emerging area of science, the course offers a unique opportunity to engage with cutting-edge research and practical implications for medicine and beyond. There are 4 modules in this course Offered by - Johns Hopkins University General Details: Duration: 5h 51m Updated: 04/2025 Language: English Source: https://www.coursera.org/learn/psychedelic-science-and-medicine MP4 | Video: AVC, 1920x1080p | Audio: AAC, 44.100 KHz, 2 Ch | 45 Lectures | 20 PDF, 4 HTML</description>
<size>6999739211</size>
</item><item>
<title>BioRxiv - CC &amp; Public Domain Catalog - 2019</title>
<category>Dataset</category>
<infohash>1956fb55a853aaf0558a20f75adfcb65154b7c6a</infohash>
<guid>https://academictorrents.com/details/1956fb55a853aaf0558a20f75adfcb65154b7c6a</guid>
<link>https://academictorrents.com/details/1956fb55a853aaf0558a20f75adfcb65154b7c6a</link>
<description>Part of a set of torrents - Index: https://sciop.net/datasets/biorxiv - Back Catalogue: (in progress) - 2018: https://academictorrents.com/details/1509e322f49fd946ab441aa7b092f53879971d87 - 2019: (this torrent) - 2020: https://academictorrents.com/details/b81c1be4f0b7ec622ec9cbde2551aaf1547dc33c - 2021: https://academictorrents.com/details/c8ab36be273872466f6a391af4e42e6541c8e65a - 2022: https://academictorrents.com/details/d33e08e51ece62509bc72042514a19a66d225bd6 - 2023: https://academictorrents.com/details/4a5dc447e3a3e0b338abaa26517689c5e804c13f - 2024: (in progress) - 2025 through 25-03-10: https://academictorrents.com/details/d70fda6123588f88478e36204b8be9a751f415da --- Full archive of [MECA](https://www.niso.org/standards-committees/meca)-formatted dumps from BioRxiv's [full text S3 endpoint](https://www.biorxiv.org/tdm). Scraped on an annual basis, with the initial upload in March 2025 partially complete. ## Format These torrents are hybrid bittorrent v1/v2 torrents - this facilitates mutation, indexing, and download of individual files. You should use a bittorrent v2 capable client to download (e.g. qbittorrent with libtorrent 2, listed as "qt6 lt20" on the download page). Academictorrents currently does not understand v2 torrent files - **the total size of the torrent listed on academictorrents is thus incorrect.** Hybrid torrents contain [BEP 47](https://www.bittorrent.org/beps/bep_0047.html) padding files to align the v1 pieces so each covers at most one file. A torrent client that understands v2 will *not download these files* since they are just empty placeholders. These torrents have also had to make some modifications to the original source structure in order to fit within the 10MB torrent limit on academictorrents ([see issue](https://github.com/academictorrents/academictorrents-docs/issues/46)). 
The primary contributor to torrent size is the duplication of file names in hybrid torrents, so the original meca filenames have been replaced with the DOI suffix for the item in the meca. 2019 was a bit of a transitional year for bioRxiv's metadata, as the files switch from the "month/bucket/meca" to just the "month/meca" format, and the DOIs begin to have the submission date embedded within them towards the end of the year. Individual item metadata is contained within the JATS XML of the meca (a meca is just a zip file, so it can be read without decompressing the whole archive), but some summary metadata is included for indexing purposes: - `doi_map.json`: maps the item DOI to the location within the torrent - `license_map.json`: maps the license to the meca - `license_counts.json`: summary statistics for each license kind - `errors.json`: any errors that were encountered while creating the torrent. ## Legality BioRxiv's bulk access page (currently) reads: &gt; The TDM repository is not intended as a source for further redistribution of articles posted on bioRxiv, or their derivatives, nor does it grant others permission to re-host content posted on bioRxiv. For most articles submitted to bioRxiv, authors retain copyright and reuse rights. If you build indexing services or tools based on the full text of articles, you must therefore link back to the text hosted at bioRxiv rather than re-host content. For reuse/redistribution of individual articles or their derivatives, please consult the licensing terms applied by the authors, which are provided in the metadata. In most cases, this will require you to contact the copyright holder in advance to obtain permission. 
It is true that *authors determine the copyright status of their work*, but it is not necessarily true that *"in most cases, this will require you to contact the copyright holder in advance to obtain permission."* The majority of work published on BioRxiv is licensed under some variant of a [Creative Commons](https://creativecommons.org/) license that expressly permits redistribution. We respect the authors' intent by redistributing all the CC and public domain works free of charge, with attribution, here. All works licensed under restrictive licenses that prohibit redistribution have been removed from the dataset and are not present in the torrent. This work is listed on academictorrents as CC BY-NC-ND 4.0, the most restrictive of the licenses found in the dataset, but the license for each work is provided in a `licenses_map.json` within the torrent. See https://sciop.net/datasets/biorxiv for further details about the creation of these torrents.</description>
<size>535864279040</size>
</item><item>
<title>13Cubed | Investigating Windows Endpoints</title>
<category>Course</category>
<infohash>1f08204a9c0aba3ae73d9f6174698fbbe7020756</infohash>
<guid>https://academictorrents.com/details/1f08204a9c0aba3ae73d9f6174698fbbe7020756</guid>
<link>https://academictorrents.com/details/1f08204a9c0aba3ae73d9f6174698fbbe7020756</link>
<description>Visit &gt;&gt;&gt; http://onehack.us/ https://i.ibb.co/ZpBtTkH0/Investigating-Windows-Endpoints.png 13Cubed - Investigating Windows Endpoints Course details Discover the world of Windows forensic investigation through professional, in-depth training crafted from the expertise behind the 13Cubed YouTube channel. This course delivers affordable and comprehensive content, tailored to help newcomers, experienced professionals looking to sharpen their skills, and anyone fascinated by digital forensics. Prerequisites - Just a basic understanding of Windows 10/11, and a willingness to learn! General Details: Duration: 11h+ Updated: 03/2025 Language: English PDFs: Included Source: https://training.13cubed.com/investigating-windows-endpoints MPEG-2 (4K) | Video: AVC, 3840x2160p | Audio: AAC, 44.100 KHz, 2 Ch</description>
<size>7957251702</size>
</item><item>
<title>Reddit comments/submissions 2025-03</title>
<category>Dataset</category>
<infohash>69d5e046e15c02182430879f50d62b18fe1404fb</infohash>
<guid>https://academictorrents.com/details/69d5e046e15c02182430879f50d62b18fe1404fb</guid>
<link>https://academictorrents.com/details/69d5e046e15c02182430879f50d62b18fe1404fb</link>
<description>Reddit comments and submissions from 2025-03 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>55846048131</size>
</item><item>
<title>ByteGrad - Professional React &amp; Next.js</title>
<category>Course</category>
<infohash>3f04a585bafac8427dd82bef9b2b859908a59e28</infohash>
<guid>https://academictorrents.com/details/3f04a585bafac8427dd82bef9b2b859908a59e28</guid>
<link>https://academictorrents.com/details/3f04a585bafac8427dd82bef9b2b859908a59e28</link>
<description>What you'll learn Crystal-clear explanations will guide you through the world of React &amp; Next.js 01. How to code React &amp; Next.js in 2025 by building realistic projects from scratch and seeing how it all fits together 02. Avoid hundreds of beginner mistakes so the people who have to interact with your code have it easy 03. Deeply master React fundamentals: (custom) hooks, Context API, state management (Zustand / RTK), etc. 04. Deeply master Next.js fundamentals: App Router, server components, server actions, revalidation, SSR &amp; SSG, etc. 05. Critical best practices that every React/Next.js developer should know (e.g. when NOT to use Next.js caching) 06. Learn how to fetch data in Next.js with the latest App Router the correct way 07. Master TypeScript in React &amp; Next.js by building simple-to-complex components (including Zod!) 08. Master Tailwind in React &amp; Next.js by building modern UIs from scratch with Flexbox, CSS Grid, etc. 09. Advanced patterns and best practices: storing state in URL, server-side pagination / sorting / filtering, etc. 10. Modern React &amp; Next.js ecosystem: Zustand, RHF (forms), Next-Auth, Stripe, Framer Motion, Shadcn-UI, Prisma, Postgres, Vercel, etc.</description>
<size>30703008976</size>
</item><item>
<title>BioRxiv - CC &amp; Public Domain Catalog - 2018</title>
<category>Dataset</category>
<infohash>1509e322f49fd946ab441aa7b092f53879971d87</infohash>
<guid>https://academictorrents.com/details/1509e322f49fd946ab441aa7b092f53879971d87</guid>
<link>https://academictorrents.com/details/1509e322f49fd946ab441aa7b092f53879971d87</link>
<description>Part of a set of torrents - Index: https://sciop.net/datasets/biorxiv - Back Catalogue: (in progress) - 2018: (this torrent) - 2019: https://academictorrents.com/details/1956fb55a853aaf0558a20f75adfcb65154b7c6a - 2020: https://academictorrents.com/details/b81c1be4f0b7ec622ec9cbde2551aaf1547dc33c - 2021: https://academictorrents.com/details/c8ab36be273872466f6a391af4e42e6541c8e65a - 2022: https://academictorrents.com/details/d33e08e51ece62509bc72042514a19a66d225bd6 - 2023: https://academictorrents.com/details/4a5dc447e3a3e0b338abaa26517689c5e804c13f - 2024: (in progress) - 2025 through 25-03-10: https://academictorrents.com/details/d70fda6123588f88478e36204b8be9a751f415da &amp;mdash;- Full archive of [MECA](https://www.niso.org/standards-committees/meca)-formatted dumps from BioRxiv's [full text S3 endpoint](https://www.biorxiv.org/tdm). Scraped on an annual basis, with the initial upload in March 2025 partially complete. These are just the CC and public domain papers for December 2018. Everything before then is collected in a single `Back_Catalogue` directory by batch, and everything after is collected by year. ## Format These torrents are hybrid bittorrent v1/v2 torrents - this facilitates mutation, indexing, and download of individual files. You should use a bittorrent v2 capable client to download (e.g. qbittorrent with libtorrent 2, listed as `qt6 lt20` on the download page). Academictorrents currently does not understand v2 torrent files - **the total size of the torrent listed on academictorrents is thus incorrect.** Hybrid torrents contain [BEP 47](https://www.bittorrent.org/beps/bep_0047.html) padding files to align the v1 pieces so each covers at most one file. A torrent client that understands v2 will *not download these files* since they are just empty placeholders. 
## Legality BioRxiv's bulk access page (currently) reads: &gt; The TDM repository is not intended as a source for further redistribution of articles posted on bioRxiv, or their derivatives, nor does it grant others permission to re-host content posted on bioRxiv. For most articles submitted to bioRxiv, authors retain copyright and reuse rights. If you build indexing services or tools based on the full text of articles, you must therefore link back to the text hosted at bioRxiv rather than re-host content. For reuse/redistribution of individual articles or their derivatives, please consult the licensing terms applied by the authors, which are provided in the metadata. In most cases, this will require you to contact the copyright holder in advance to obtain permission. It is true that *authors determine the copyright status of their work*, but it is not necessarily true that *"in most cases, this will require you to contact the copyright holder in advance to obtain permission."* The majority of work published on BioRxiv is licensed under some variant of a [Creative Commons](https://creativecommons.org/) license that expressly permits redistribution. We respect the authors' intent by redistributing all the CC and public domain works free of charge, with attribution, here. All works licensed under restrictive licenses that prohibit redistribution have been removed from the dataset and are not present in the torrent. This work is listed on academictorrents as CC BY-NC-ND 4.0, the most restrictive of the licenses found in the dataset, but the license for each work is provided in a `licenses_map.json` within the torrent. See https://sciop.net/datasets/biorxiv for further details about the creation of these torrents.</description>
<size>34988883968</size>
</item><item>
<title>BioRxiv - CC &amp; Public Domain Catalog - 2025 through 25-03-10</title>
<category>Dataset</category>
<infohash>d70fda6123588f88478e36204b8be9a751f415da</infohash>
<guid>https://academictorrents.com/details/d70fda6123588f88478e36204b8be9a751f415da</guid>
<link>https://academictorrents.com/details/d70fda6123588f88478e36204b8be9a751f415da</link>
<description>Part of a set of torrents - Index: https://sciop.net/datasets/biorxiv - Back Catalogue: (in progress) - 2018: https://academictorrents.com/details/1509e322f49fd946ab441aa7b092f53879971d87 - 2019: https://academictorrents.com/details/1956fb55a853aaf0558a20f75adfcb65154b7c6a - 2020: https://academictorrents.com/details/b81c1be4f0b7ec622ec9cbde2551aaf1547dc33c - 2021: https://academictorrents.com/details/c8ab36be273872466f6a391af4e42e6541c8e65a - 2022: https://academictorrents.com/details/d33e08e51ece62509bc72042514a19a66d225bd6 - 2023: https://academictorrents.com/details/4a5dc447e3a3e0b338abaa26517689c5e804c13f - 2024: (in progress) - 2025 through 25-03-10: (this torrent) &amp;mdash;- Full archive of [MECA](https://www.niso.org/standards-committees/meca)-formatted dumps from BioRxiv's [full text S3 endpoint](https://www.biorxiv.org/tdm). Scraped on an annual basis, with the initial upload in March 2025 partially complete. ## Format These torrents are hybrid bittorrent v1/v2 torrents - this facilitates mutation, indexing, and download of individual files. You should use a bittorrent v2 capable client to download (e.g. qbittorrent with libtorrent 2, listed as `qt6 lt20` on the download page). Academictorrents currently does not understand v2 torrent files - **the total size of the torrent listed on academictorrents is thus incorrect.** Hybrid torrents contain [BEP 47](https://www.bittorrent.org/beps/bep_0047.html) padding files to align the v1 pieces so each covers at most one file. A torrent client that understands v2 will *not download these files* since they are just empty placeholders. ## Legality BioRxiv's bulk access page (currently) reads: &gt; The TDM repository is not intended as a source for further redistribution of articles posted on bioRxiv, or their derivatives, nor does it grant others permission to re-host content posted on bioRxiv. For most articles submitted to bioRxiv, authors retain copyright and reuse rights. 
If you build indexing services or tools based on the full text of articles, you must therefore link back to the text hosted at bioRxiv rather than re-host content. For reuse/redistribution of individual articles or their derivatives, please consult the licensing terms applied by the authors, which are provided in the metadata. In most cases, this will require you to contact the copyright holder in advance to obtain permission. It is true that *authors determine the copyright status of their work*, but it is not necessarily true that *"in most cases, this will require you to contact the copyright holder in advance to obtain permission."* The majority of work published on BioRxiv is licensed under some variant of a [Creative Commons](https://creativecommons.org/) license that expressly permits redistribution. We respect the authors' intent by redistributing all the CC and public domain works free of charge, with attribution, here. All works licensed under restrictive licenses that prohibit redistribution have been removed from the dataset and are not present in the torrent. This work is listed on academictorrents as CC BY-NC-ND 4.0, the most restrictive of the licenses found in the dataset, but the license for each work is provided in a `licenses_map.json` within the torrent. See https://sciop.net/datasets/biorxiv for further details about the creation of these torrents.</description>
<size>219571814400</size>
</item><item>
<title>noaa-ncei-ndbc-co-ops</title>
<category>Dataset</category>
<infohash>0a6b2b7865b00df61473e7baf23440902ceb186b</infohash>
<guid>https://academictorrents.com/details/0a6b2b7865b00df61473e7baf23440902ceb186b</guid>
<link>https://academictorrents.com/details/0a6b2b7865b00df61473e7baf23440902ceb186b</link>
<description>The National Water Level Observation Network (NWLON) is a network of long-term water level stations operated and maintained by the NOAA Center for Operational Oceanographic Products and Services (CO-OPS). NWLON stations are located on shore-based platforms, and primarily collect real-time water level measurements. Almost 25 CO-OPS Physical Oceanographic Real-Time Systems (PORTS) comprise a group of water level stations in and around U.S. ports and harbors. Most of the NWLON and PORTS stations also collect real-time meteorological data. Data parameters include barometric pressure, wind direction, speed and gust, air temperature, and water temperature.</description>
<size>1771495840</size>
</item><item>
<title>enwiki-20250401-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>8400b9d0106f7117b86103225d3ac9e7c25f0ed8</infohash>
<guid>https://academictorrents.com/details/8400b9d0106f7117b86103225d3ac9e7c25f0ed8</guid>
<link>https://academictorrents.com/details/8400b9d0106f7117b86103225d3ac9e7c25f0ed8</link>
<description>English Wikipedia Multistream 2025-04-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>24872922961</size>
</item><item>
<title>Dickie Bush | Full Stack Writer</title>
<category>Course</category>
<infohash>04c1ddffa3ecac00cf4ace4a5359053531b433df</infohash>
<guid>https://academictorrents.com/details/04c1ddffa3ecac00cf4ace4a5359053531b433df</guid>
<link>https://academictorrents.com/details/04c1ddffa3ecac00cf4ace4a5359053531b433df</link>
<description>Visit &gt;&gt;&gt; http://onehack.us/ https://i.ibb.co/GvWHd1BZ/Full-Stack-Writer.png Dickie Bush - Full Stack Writer Course details WHAT YOU GET? Full Stack Writer is a bundle of 8 Mini Courses, each containing both text and video lessons, which are hosted in our 7-Figure Writer Skool group. Mini-Course 1 – Viral Writing: Inside our Viral Writing Mini-Course, you'll find everything you need to know to get started writing online… Including how to test different niches, how to spot “winning” topics, how to format your content for reader engagement, and how to write about ANY topic to maximize virality (without “dumbing” the content down or alienating your target readers). - A proven 30-day Content Strategy and Posting Schedule - Our Framework For Finding Your 1st Niche In Record Time - A Crash Course On How To Write Viral Content On X, LinkedIn, and Medium - And dozens of other frameworks, templates, and resources to help you start writing online and build an audience of loyal readers Mini-Course 2 – Newsletter Writing: Inside our Newsletter Writing Mini-Course, you'll unlock our framework for ideating, naming, and launching your first newsletter. But that's not it! You'll also get access to our proven templates – which you can use to accelerate the writing process, impress readers, and differentiate your newsletter from “everyone else”. - A Crash Course On Free vs. Paid Newsletters - Our “Viral Loop” Strategy To Build Your Email List On Autopilot - And 4 Newsletter Templates Anyone Can Use To Engage And Captivate Readers Mini-Course 3 – Offer Creation: With Our Offer Creation Mini-Course, you'll master the subtle art of framing “what you do” as the vehicle to unlock some sort of tangible benefit for the customer. This is a masterclass in how to position yourself, and your products and services, in a way that stops customers dead in their tracks, lets them immediately understand what you're proposing, and allows you to charge a premium. 
- How to Choose and Name Your Superniche Digital Product - Implementing Our Irresistible Offer Script For Your Digital Product Or Service - The Single Biggest Pricing Mistake You Should Avoid (And How To Charge A Premium) - And dozens of other frameworks, templates, and resources to help you charge more, make sales with ease, and 10x your writing income. Mini-Course 4 – Landing Page Copywriting: In our Landing Page Copywriting Mini-Course, you'll get our entire framework and template for writing landing pages that convert customers on autopilot. You'll also get instant access to: - A Crash Course on Educational Copywriting - Our 7-Figure, 10-Step Landing Page Template - 3 Tips To Keep Your Refund Rate To a Minimum Mini-Course 5 – Sales Email Marketing: You'll get access to our never-before-released 7-Day Countdown Sequence that we run before every cohort of Ship 30 for 30. These 7 emails (along with templates you can use for your own products and services) have generated over $3,000,000 in Ship 30 for 30 sales – and we reuse them over and again (because they are good). In addition to that, we'll also walk you through: - Why Email Marketing Is Critical In Every Digital Writer's Business - How to Write Irresistible Subject Lines To Get Your Audience To Open Your Emails - The Single Biggest Mistake Keeping Most Digital Writers and Creators Broke (And How To Avoid It) Mini-Course 6 – AI Prompt Writing: We'll give you a step-by-step walkthrough of how we write prompts (for ChatGPT, but also Bard and Claude) that 10x your efficiency as Digital Writers and Digital Entrepreneurs – along with some of our favorites from our Write With AI archives, including: - The 2-Year Test: Turn ChatGPT Into Your Personal Idea Generation Coach - How To Train ChatGPT To Write Targeted Sales Copy To Make An Irresistible Offer With The 5 Stages Of Customer Awareness - Pinpoint Writing With AI: How To Go From Idea To Newsletter In 11 Minutes Mini-Course 7 – Ghostwriting: You'll get a 
complete crash course on Ghostwriting As A Career, along with our “Free Work Pitch” strategy for finding and pitching your very first ghostwriting client. This is the exact approach Cole used to build the first ghostwriting agency – scaling to 23 full-time employees, 80+ concurrent clients, and over $2,000,000 in annual revenue. In addition to that, we'll also walk you through: - The Biggest Difference Between Freelance Writing and Ghostwriting - A 5-Step Framework For Ghostwriting Anything For Anyone - The 3 Most Lucrative Ghostwriting Services You Can Offer Mini-Course 8 – 7-Figure Writer Roadmap: Lastly, we'll share with you our stories of hitting 6 and then 7 figures in revenue as Digital Writers – what we learned, what we would do differently if we were starting over again, and the exact order we would recommend you take in order to fast-track your success (without making all the mistakes we did!). General Details: Duration: 5h 27m 03s Updated: 02/2025 Language: English PDF content: Included Source: https://www.dickiebush.com/writing MP4 | Video: AVC, 1920x1080p | Audio: AAC, 44.100 KHz, 2 Ch | 22 Lectures | 81+ PDF Files</description>
<size>6864976927</size>
</item><item>
<title>National Integrated Drought Information System (NIDIS) - GCP Dump - 25-04-03</title>
<category>Dataset</category>
<infohash>3514cbb4314afb0e325ad1f91c48e1674d23a728</infohash>
<guid>https://academictorrents.com/details/3514cbb4314afb0e325ad1f91c48e1674d23a728</guid>
<link>https://academictorrents.com/details/3514cbb4314afb0e325ad1f91c48e1674d23a728</link>
<description>## Source Description Drought affects every sector of the national economy, costing U.S. taxpayers billions of dollars in damages. It impacts urban and rural communities, the agriculture industry, water and electric utilities, public health, transportation, jobs and more. In 2006, Congress passed the National Integrated Drought Information System (NIDIS) Act of 2006, which directs NIDIS to develop and “provide a national drought early warning information system.” NIDIS was reauthorized in 2014 and 2019. Its mission is to help the nation proactively manage drought risks and impacts and improve long-term drought resilience. To fulfill this mission, NIDIS studies and addresses the impacts of drought by collecting reliable data, communicating relevant information, and developing innovative tools and resources for public and management use. For more information about NIDIS, visit the U.S. Drought Portal. ## This Dataset NIDIS is an aggregation point for a lot of drought-related data that might otherwise be better captured in dedicated datasets. This dataset is a collection of any fragments that are uniquely held, collected, or curated within NIDIS, starting with a dump of the [google cloud bucket](https://console.cloud.google.com/storage/browser/noaa-nidis-drought-gov-data). The google cloud bucket organized datasets by format rather than subject, so e.g. it collects all `cog`-format data by month. It also contains a directory `current-conditions` with all the current time interval's derivative data (usually daily). Since that data is derived and recreated every day, it isn't all that useful in archival form, so **`current-conditions` is omitted from the upload.** The remainder of the bucket is present, with each leaf subdirectory compressed to minimize the number of files and optimize the chunking in the torrent. 
The leaf-directory archives were created like this:

```shell
find ./datasets -type d -links 2 | \
  xargs -I{} tar -c -I 'xz -2 -T0' -f {}.tar.xz {}
find ./research/nasa-usdm-importance-by-indicator/input-data/percentiles -type d -links 2 | \
  xargs -I{} tar -c -I 'xz -2 -T0' -f {}.tar.xz {}
```

## See Also - original upload: https://sciop.net/datasets/ncei-noaa-nidis - noaa-webrips/drought-gov: https://sciop.net/datasets/noaa-webrips/drought-gov</description>
<size>828190490624</size>
</item><item>
<title>SAMHSA Library Publication Archive on 2025-04-03</title>
<category>Dataset</category>
<infohash>05a6c2c4f1c1620214d251ede6e43bf370e38c82</infohash>
<guid>https://academictorrents.com/details/05a6c2c4f1c1620214d251ede6e43bf370e38c82</guid>
<link>https://academictorrents.com/details/05a6c2c4f1c1620214d251ede6e43bf370e38c82</link>
<description>This is an archive of library.samhsa.gov copied on 2025-04-03. This archive is sourced from both an old list of publications as well as a list of publications as of the archive date. As a result, some things are duplicated. This archive also includes several assets scrubbed between January 11, 2025 and April 3, 2025, when it was downloaded. A .json and a .csv list all the assets in the archive.</description>
<size>3324391958</size>
</item><item>
<title>OrhanErgun | CSA Using ChatGPT</title>
<category>Course</category>
<infohash>7aadec87e236760620b1e01477ef2c4e5be03083</infohash>
<guid>https://academictorrents.com/details/7aadec87e236760620b1e01477ef2c4e5be03083</guid>
<link>https://academictorrents.com/details/7aadec87e236760620b1e01477ef2c4e5be03083</link>
<description>Visit &gt;&gt;&gt; http://onehack.us/ https://i.ibb.co/WN7xt3Mx/CSA.png OrhanErgun.net - CSA using ChatGPT Course details In the digital age, cybersecurity has become a critical concern for all organizations. The role of a cybersecurity specialist is both demanding and rewarding, requiring a comprehensive understanding of various aspects, including creating information security policies, disaster recovery planning, risk assessment, compliance implementation, operational activities, incident management, and technical control implementation. Our course, "Mastering Cybersecurity with ChatGPT: From Basics to Specialist," has been designed to provide a solid foundation for those seeking to embark on a rewarding journey in the realm of cybersecurity. Whether you are a beginner with basic cybersecurity knowledge or someone looking to sharpen your skills, this course is an ideal platform to elevate your understanding and expertise. In this course, we utilize the power of ChatGPT, an advanced AI model developed by OpenAI, to guide you through the various facets of cybersecurity. The use of AI in training helps simplify complex concepts and provides practical, hands-on experience in managing cybersecurity tasks. By the end of the course, you will have learned to: 1. Develop and implement robust information security policies. 2. Design effective disaster recovery plans. 3. Conduct comprehensive risk assessments. 4. Ensure compliance with relevant standards. 5. Manage and respond to security incidents. 6. Implement technical controls effectively. We provide a blend of theoretical understanding and practical exercises to ensure you can apply your knowledge in real-world scenarios. This course offers continuous assessment and feedback, allowing you to track your progress and understanding of the material. 
By harnessing the power of ChatGPT for cybersecurity, you can enter the workforce confidently, ready to tackle the challenges that lie ahead in your career as a cybersecurity specialist. Enroll in "Mastering Cybersecurity with ChatGPT: From Basics to Specialist" today and take the first step towards a rewarding and successful career in cybersecurity. Enrolling in this course gives you exclusive access to our vibrant study group, where you can engage in enriching technical discussions, collaborate on labs, and get answers to your questions from peers and experts. This collaborative environment sets us apart from other training providers, who often offer solitary, independent study options. By joining our study group, you ll enhance your learning experience through collective problem-solving, hands-on lab work, shared insights, and a supportive community. Elevate your learning journey with us and thrive in a network of like-minded learners! Skills you’ll gain - Study Group Participation - Access the Content Anywhere, Anytime - Certificate of Completion About Instructor: Dr. Mohamed Atef is a distinguished Cybersecurity Consultant and Certified Instructor, boasting an impressive career spanning over two decades of hands-on experience orchestrating ... General Details: Duration: 5h 51m 01s Updated: Mon, 04-Nov-2024 Language: English Source: https://orhanergun.net/courses/csa-using-chatgpt MP4 | Video: AVC, 1280x720p | Audio: AAC, 48.000 KHz, 2 Ch</description>
<size>1471130756</size>
</item><item>
<title>Frontend Masters | Build a Fullstack App with Vanilla JS and Go</title>
<category>Course</category>
<infohash>ae47314c9ee9b955f1b45dc8fdc7a1229125cc3c</infohash>
<guid>https://academictorrents.com/details/ae47314c9ee9b955f1b45dc8fdc7a1229125cc3c</guid>
<link>https://academictorrents.com/details/ae47314c9ee9b955f1b45dc8fdc7a1229125cc3c</link>
<description>Visit &gt;&gt;&gt; http://onehack.us/ https://i.ibb.co/xK2TP4wc/Build-a-Fullstack.png Frontend Masters - Build a Fullstack App with Vanilla JS and Go Course details Build a Full-Stack Real-World Web Application from Scratch In this three-day workshop, you will build a real-world, fully functional web application from scratch using Vanilla JavaScript for the frontend and Go for the backend. You’ll follow industry best practices for structuring a full-stack project, working with APIs, handling authentication, managing state, and securing data. By the end of the workshop, you’ll have a complete, production-ready application ready to deploy and scale. Key Takeaways By participating along with us in the workshop, you'll learn: - Develop a real-world full-stack application from scratch - Master best practices for frontend and backend development - Learn how to create and consume APIs with Go and JavaScript without a framework - Implement authentication, state management, and data sync - Deploy a fully functional, production-ready application Is This Workshop for Me? This workshop is for developers who want to build and deploy a real-world full-stack web application without relying on heavy frameworks. It’s ideal for frontend developers looking to expand into backend development with Go, and for backend developers wanting to improve their JavaScript skills with a real-world project. Any Prerequisites? - Basic JavaScript, HTML, and CSS experience - Familiarity with JavaScript and DOM APIs - Understanding of basic Go backend development - Basic SQL and database concepts General Details: Duration: 1h+ Updated: 04/2025 Language: English Source: https://frontendmasters.com/workshops/vanilla-js-go/ MP4 | Video: AVC, 1920x1080p | Audio: AAC, 44.100 KHz, 2 Ch</description>
<size>2855487774</size>
</item><item>
<title>drought.gov zim/warc snapshot 2025-04-03</title>
<category>Dataset</category>
<infohash>0a246d40b8f8293583f954a72ea58980edd11ef4</infohash>
<guid>https://academictorrents.com/details/0a246d40b8f8293583f954a72ea58980edd11ef4</guid>
<link>https://academictorrents.com/details/0a246d40b8f8293583f954a72ea58980edd11ef4</link>
<description>Archive of drought.gov, made on word that NOAA's web infrastructure will be taking a large hit in the next few days. Part of https://sciop.net/datasets/noaa-webrips</description>
<size>9143582720</size>
</item><item>
<title>Stack Exchange Data Dump (2025-03-31)</title>
<category>Dataset</category>
<infohash>128a4d5b12d7738f9ab63e86c5b441c221fa8668</infohash>
<guid>https://academictorrents.com/details/128a4d5b12d7738f9ab63e86c5b441c221fa8668</guid>
<link>https://academictorrents.com/details/128a4d5b12d7738f9ab63e86c5b441c221fa8668</link>
<description>This data dump is sourced from the various sites in the Stack Exchange network of Q&amp;A sites. This dump contains data up to and including 2025-03-31. The exact licenses for each bit of content are embedded in each entry. For license date ranges, see the root-level license.txt, or https://stackoverflow.com/help/licensing. For the schema, see the sede-and-data-dump-schema.md file within each .7z. This torrent has also been archived at https://archive.org/details/stackexchange_20250331</description>
<size>97143704928</size>
</item><item>
<title>mean-layer-temperature-noaa-rss-uah-ucar</title>
<category>Dataset</category>
<infohash>da8a24f2de22664ac4aa906b761e9fb6e6ccc848</infohash>
<guid>https://academictorrents.com/details/da8a24f2de22664ac4aa906b761e9fb6e6ccc848</guid>
<link>https://academictorrents.com/details/da8a24f2de22664ac4aa906b761e9fb6e6ccc848</link>
<description>Contains 5 separate mean layer temperature datasets from NOAA NCEI: Mean Layer Temperature - NOAA CDR Configuration Item ID: 01B-25 The Mean Layer Temperature - NOAA Climate Data Record (CDR) provides long-term temperature measurements for a thick layer of the upper atmosphere. Input data comes from Mean Layer Temperature (MLT) level-1c swath radiances taken by the Microwave Sounding Unit (MSU), the Advanced Microwave Sounder Unit-A (AMSU-A) and the Advanced Technology Microwave Sounder (ATMS) aboard NOAA, NASA, and EUMETSAT satellites. The CDR was calibrated using the NOAA STAR Integrated Microwave Inter-Calibration Approach (IMICA) to remove calibration drift errors. The final dataset incorporates inter-satellite bias corrections to ensure consistency, and adjustments that account for diurnal drift effect, differences between viewing angles, and channel frequency differences between sensors. The final record is a monthly global gridded dataset with 2.5°x2.5° Latitude/Longitude resolution, and extends from 1978-present. Mean Layer Temperature - RSS CDR Configuration Item ID: 01B-13 The Mean Layer Temperature - Remote Sensing Systems (RSS) Climate Data Record (CDR) integrates thermal emission from oxygen molecules measured at different frequencies from microwave sounders. These measurements provide a crucial element for long-term monitoring of atmospheric temperature, particularly in regions that lack high-quality radiosonde measurements. The final record provides global average and global anomaly maps at 2.5°x2.5° resolution, and extends from 1978-2023. Mean Layer Temperature - UAH CDR Configuration Item ID: 01B-10 The Mean Layer Temperature - University of Alabama Huntsville (UAH) Climate Data Record (CDR) provides monthly gridded data from microwave sensors onboard operational polar orbiting satellites. 
These satellites measure temperature anomalies of the atmospheric layer from the surface to roughly 10km, centered in the Lower Troposphere. The final CDR has been calibrated to include adjustments for spacecraft drift, orbital decay, instrument calibration and instrument biases when merging data from the various spacecraft. Mean Layer Temperatures centered in the mid-troposphere (0-15km) and in the lower stratosphere (10-25km) extend this dataset. The final record includes global monthly gridded temperature anomalies on a global 2.5°x2.5° grid extending from 1978-2023. Mean Layer Temperature - UCAR (Lower Stratosphere) CDR Configuration Item ID: 01B-07 The Lower Stratosphere Mean Layer Temperatures (MLT) Climate Data Record (CDR) shows the long-term variation of Advanced Microwave Sounder Unit (AMSU) channel 9 brightness and Microwave Sounding Unit (MSU) channel 4 brightness measurements that peak in the Lower Stratosphere. These instrument signals are necessary to produce the monthly MLT of the Lower Stratosphere. Data (level-1b) from AMSU/MSU on board multiple satellites were calibrated using coincident Global Positioning System (GPS) Radio Occultation (RO) temperature profile measurements from 2001-present. The calibrated MSU/AMSU data from 2001-2014 serve as reference data to calibrate other overlapped MSU/AMSU data from 1980-2001. The final record has global coverage, and extends from 1986-2019. Mean Layer Temperature - UCAR (Upper Troposphere and Lower Stratosphere) CDR Configuration Item ID: 01B-14 The Upper Troposphere and Lower Stratosphere Mean Layer Temperature (MLT) Climate Data Record (CDR) provides calibrated MLT data spanning both atmospheric regions. Input comes from the long-term variation of Brightness Temperature (BT) measurements from Advanced Microwave Sounder Unit (AMSU) channel 7 and Microwave Sounding Unit (MSU) channel 3 that peak in the upper troposphere and lower stratosphere. 
These peaks provide the necessary signal to produce the monthly MLT for both atmospheric regions. The AMSU Data (level-1b) is calibrated with high-quality radiosonde observations identified by coincident Global Positioning System (GPS) Radio Occultation (RO) temperature profile measurements. The calibrated MSU/AMSU data in the period of 2001-2014 serve as reference data to calibrate other overlapped MSU/AMSU data from 1980-2001. The final record has global coverage, and extends from 1986-2019.</description>
<size>3743530777</size>
</item><item>
<title>Gumroad | Python States For Houdini TDs</title>
<category>Course</category>
<infohash>b6dd4136e9f6cd00f223d99e6a7db62c897e6bf4</infohash>
<guid>https://academictorrents.com/details/b6dd4136e9f6cd00f223d99e6a7db62c897e6bf4</guid>
<link>https://academictorrents.com/details/b6dd4136e9f6cd00f223d99e6a7db62c897e6bf4</link>
<description>Gumroad - Python States for Houdini TDs Course details Master Python States for Houdini to create powerful, interactive procedural tools! This course is designed for Houdini Technical Directors (TDs) who want to dive deep into Python states and understand how to use them for more interactive procedural tools. Paul Ambrosiussen will walk you through the core concepts, examples, and features of Python states in Houdini, and show how to harness their full potential. The course covers the following major topics: - Creating and Installing: Learn how to create different types of Python states and install them. - UI Event Handlers: Understand lifecycle event callbacks and how to integrate input devices (mouse, keyboard, etc.). - Context Menus: Learn how to create custom right-click menus to enhance your users' interaction. - Selections: Discover how to bind custom geometry and object selectors. - Guide Geometry: Build drawables and enhance geometry creation using Python with SOP verbs. - Handles: Bind static and dynamic handles to parameters and update them in real-time. - State Parameters: Learn to configure state interactions with your tools using state parameters. - Info Panels: Create HUDs (Heads-Up Displays) to streamline user experience in Houdini. - Miscellaneous Tips: Implement advanced undo, work with Invoke SOP, and other essential Python tips. Practical Examples: - The course will guide you through real-world examples of Python states being built from scratch, including a dedicated section for ongoing examples based on student requests. Additional Course Features: - Slides &amp; Recordings: Educational slides, tips, and video recordings for every chapter. - Google Drive Access: Access HDAs, Python files, and recordings. - Discord Community: Connect with a community of like-minded peers for support and feedback. 
Requirements: - Basic knowledge of Python - Basic experience with Houdini (Houdini 19.0+ required, Apprentice version works fine!) - SideFXLabs installed - IDE (e.g., Sublime Text, Visual Studio, etc.) Who is this course for? - Houdini users and TDs looking to dive deeper into Python scripting - Anyone interested in making their procedural tools more interactive and powerful - Artists looking to improve their understanding of Python states and custom workflows Pro Tip: Python states can open up a whole new world of customization for your Houdini tools, giving you more flexibility and control over your projects! General Details: Instructor: Paul Ambrosiussen Duration: 7h 15m 50s Updated: 03/2025 Language: English Subtitle: English .SRT Source: https://ambrosiussen.gumroad.com/l/pythonstatesforhoudini MP4 | Video: AVC, 1920x1080p | Audio: AAC, 44.100 KHz, 2 Ch</description>
<size>1424377959</size>
</item><item>
<title>Smithsonian Open Access Collections Metadata</title>
<category>Dataset</category>
<infohash>4ba681158876d839973ba99314b7f28ff356bbf1</infohash>
<guid>https://academictorrents.com/details/4ba681158876d839973ba99314b7f28ff356bbf1</guid>
<link>https://academictorrents.com/details/4ba681158876d839973ba99314b7f28ff356bbf1</link>
<description>NOTE: contents decompress to ~44GB! From the AWS dataset web page: &gt; The Smithsonian’s mission is the "increase and diffusion of knowledge" and has been collecting since 1846. The Smithsonian, through its efforts to digitize its multidisciplinary collections, has created millions of digital assets and related metadata describing the collection objects. On February 25th, 2020, the Smithsonian released over 2.8 million CC0 interdisciplinary 2-D and 3-D images, related metadata, and additionally, research data from researchers across the Smithsonian. The 2.8 million "open access" collections are a subset of the Smithsonian’s 155 million objects, 2.1 million library volumes and 156,000 cubic feet of archival collections held in 19 museums, 9 research centers, libraries, archives and the National Zoo. Digitization of collections is ongoing. This archive contains the metadata for all Open Access items, as of 2025-03-29 (now substantially more than 2.8 million items). Records are in JSON format, split by institution; the record format details are documented at the last link below. * Smithsonian Open Access page: https://www.si.edu/openaccess * AWS Open Data page: https://registry.opendata.aws/smithsonian-open-access/ * Enterprise Data Access Network documentation: https://edan.si.edu/openaccess/docs/</description>
<size>3691548384</size>
</item><item>
<title>SkillShare | Moving From Blender to Maya: Learn a Layered Animation Workflow</title>
<category>Course</category>
<infohash>ced13abbba47c65fb418a7efeb5d918f51910e73</infohash>
<guid>https://academictorrents.com/details/ced13abbba47c65fb418a7efeb5d918f51910e73</guid>
<link>https://academictorrents.com/details/ced13abbba47c65fb418a7efeb5d918f51910e73</link>
<description>SkillShare - Moving From Blender to Maya: Learn a Layered Animation Workflow About This Class Use layered animation to speed up your workflow, animate more efficiently, and build refined 3D animations in Maya. When Sir Wade Neistadt first started animating in 3D, he wasn’t sure where to start. Now almost a decade into the industry, Sir Wade has built a career in 3D animation as a freelance animator, content creator, and educator. With over 230K YouTube subscribers and 3D animation collaborations with brands like Adobe and LG, he has helped thousands of aspiring and professional animators find their place in the world of 3D animation. Now, Sir Wade created this series of four classes as the resource he wished he had when he was learning 3D animation. In this class, Sir Wade will show you how you can use a layered animation workflow to speed up your 3D animation process and have more control over character movement. No matter if you’ve been using a pose-to-pose workflow for years and are curious about layered animation or have no animation experience, Sir Wade will show you how to create a short animation in Maya using layered animation techniques. With Sir Wade by your side, you’ll: - Explore Maya’s interface and some of its most powerful features - Discover the differences between an animation layer system, a layered workflow, and a pose-to-pose workflow - Animate a complex dive roll using a layered animation workflow - Refine your animation using a geometry-blocking workflow Plus, Sir Wade created a downloadable handout filled with tips and tricks for getting started in Maya, including some of his favorite hotkeys, which you can find in the class resources. 
Whether you’ve been animating in a different software and want to explore Maya for the first time or you just started animating in 3D and want to know the benefits of animating in a layered workflow instead of pose to pose, you’ll leave this class ready to use Maya for other animation projects and with the confidence to animate with both pose to pose and a layered workflow. While you do not need animation experience to take this class, some experience with 3D animation software will be helpful. You’ll need a computer, Maya and a character to animate. To continue learning about 3D animation, explore Sir Wade’s full 3D animation learning path. Skills you’ll gain - Autodesk Maya - Computer - Animation &amp; 3D - Motion &amp; Animation - 3D Animation General Details: Duration: 1h 04m 50s Updated: 03/2025 Language: English Source: https://www.skillshare.com/en/classes/moving-from-blender-to-maya-learn-a-layered-animation-workflow/992930012 MP4 | Video: AVC, 1920x1080p | Audio: AAC, 48.000 KHz, 2 Ch</description>
<size>429731659</size>
</item><item>
<title>Dometrain | From Zero to Hero: Authentication &amp; Authorization in Blazor</title>
<category>Course</category>
<infohash>a96f452591b884c72061e12bf101e4074cf49193</infohash>
<guid>https://academictorrents.com/details/a96f452591b884c72061e12bf101e4074cf49193</guid>
<link>https://academictorrents.com/details/a96f452591b884c72061e12bf101e4074cf49193</link>
<description>Dometrain - From Zero to Hero: Authentication &amp; Authorization in Blazor About This Class Learn how to implement authentication and authorization in Blazor One of the hardest things about Blazor, by far, is implementing authentication and authorization. It's all fine when you're developing simple demo apps, but when you need a production-grade Blazor application, documentation and publicly available resources are either subpar or non-existent. This is why Blazor expert Jimmy Engström will spend the next four and a half hours explaining authorization and authentication in Blazor and also implementing it with practical examples and multiple identity providers. From claims, policies and roles to Auth0 and EntraID, this course will teach you how to implement auth in Blazor properly. General Details: Duration: 4h 20m 30s Updated: 03/2025 Language: English Source: https://dometrain.com/course/from-zero-to-hero-authentication-authorization-in-blazor/ MP4 | Video: AVC, 1920x1080p | Audio: AAC, 44.100 KHz, 2 Ch</description>
<size>992692638</size>
</item><item>
<title>NIH Pubchem RDF Archive 2025-02-09</title>
<category>Dataset</category>
<infohash>c95b5317b592dceedf66b4fceb071c1404ed2adb</infohash>
<guid>https://academictorrents.com/details/c95b5317b592dceedf66b4fceb071c1404ed2adb</guid>
<link>https://academictorrents.com/details/c95b5317b592dceedf66b4fceb071c1404ed2adb</link>
<description>This is a single folder as part of a complete archive of Pubchem on ftp://ftp.ncbi.nlm.nih.gov/pubchem/. Downloads were completed over the course of a few days around 2025-02-09. Source: National Library of Medicine.</description>
<size>134468802560</size>
</item><item>
<title>NIH Pubchem Compond Archive 2025-02-09</title>
<category>Dataset</category>
<infohash>60b9750b338007f98c4f53f878cde952c6708034</infohash>
<guid>https://academictorrents.com/details/60b9750b338007f98c4f53f878cde952c6708034</guid>
<link>https://academictorrents.com/details/60b9750b338007f98c4f53f878cde952c6708034</link>
<description>This is a single folder as part of a complete archive of Pubchem on ftp://ftp.ncbi.nlm.nih.gov/pubchem/. Downloads were completed over the course of a few days around 2025-02-09. Source: National Library of Medicine.</description>
<size>1864916582400</size>
</item><item>
<title>US Dept. of Education National Center for Homeless Education Statistics</title>
<category>Dataset</category>
<infohash>ed850f8818acdd85c6da7051d1976fac5f4b2d30</infohash>
<guid>https://academictorrents.com/details/ed850f8818acdd85c6da7051d1976fac5f4b2d30</guid>
<link>https://academictorrents.com/details/ed850f8818acdd85c6da7051d1976fac5f4b2d30</link>
<description>This covers all downloadable and backing information available at the NCHE [data page](https://profiles.nche.seiservices.com/Default.aspx).  This includes PDF forms (profiles) breaking out statistics by state and year up to SY '16-'17, and summaries by state combining SY '19-'20 through '21-'22 (the latter are web page snapshots).  The archive also includes a national fiscal summary, and a PDF snapshot of the web page summarizing national statistics. Finally, the archive bundles the HTML of each state's web page (redundant with PDF snapshots), and the script used for downloading the pages and original PDF files in bulk. As data published directly by a US Government department, this is in the public domain. NCHE Website: https://nche.ed.gov/</description>
<size>31027376</size>
</item><item>
<title>Bioassay.tar</title>
<category>Dataset</category>
<infohash>0987d6a5919861fd42c851ff763160fb1d016a95</infohash>
<guid>https://academictorrents.com/details/0987d6a5919861fd42c851ff763160fb1d016a95</guid>
<link>https://academictorrents.com/details/0987d6a5919861fd42c851ff763160fb1d016a95</link>
<description>This is a single folder as part of a complete archive of Pubchem on ftp://ftp.ncbi.nlm.nih.gov/pubchem/Bioassay. Downloads were completed over the course of a few days around 2025-02-09. Source: National Library of Medicine.</description>
<size>77505843200</size>
</item><item>
<title>Amigoscode - Microservices and Distributed Systems</title>
<category>Course</category>
<infohash>823f6110c670bd970031d6c731db0f06628df784</infohash>
<guid>https://academictorrents.com/details/823f6110c670bd970031d6c731db0f06628df784</guid>
<link>https://academictorrents.com/details/823f6110c670bd970031d6c731db0f06628df784</link>
<description>Welcome to Microservices and Distributed Systems - one of the most in-demand courses on the platform, and for good reason: microservices are quickly becoming the dominant architecture in the field and a critical skill for a professional Java developer. The course is heavily hands-on, building a distributed application and learning the exact technology stack that makes a microservice tick. Join me today and master microservices with: •20-chapter roadmap made up of over 10 hours of video content: From fundamentals of the architecture to application deployment. •156 lessons: A step-by-step practical guide through every technology. •Microservices Application building: A comprehensive guide to give you all the knowledge needed to deploy components yourself. •Dedicated Discord Channel: Where you will be able to find answers to all your questions and chat with me and the students.</description>
<size>3477871758</size>
</item><item>
<title>AmigosCode - Full Stack Spring Boot &amp; React</title>
<category>Course</category>
<infohash>4952efc54203227c0c79453be575ca7cfd529b6a</infohash>
<guid>https://academictorrents.com/details/4952efc54203227c0c79453be575ca7cfd529b6a</guid>
<link>https://academictorrents.com/details/4952efc54203227c0c79453be575ca7cfd529b6a</link>
<description>Syllabus: - Intro. - Spring Initializr. - IntelliJ and AWS SDK. - AWS Credentials. - Amazon S3 Client. - Creating S3 Bucket. - Saving files to S3 Bucket Implementation. - User Profile Model. - Create in-memory Database. - API &amp; Service Layer Implementation. - Upload Image API. - Check list to upload images (logic). - Facebook Create React App. - React Components and Axios. - Rendering User Profile. - React Dropzone. - Pexels. - UI Logic to send files to backend. - Increase servlet max file size. - Exercise. - Let's implement uploadUserProfileImage(). - Let's test things. - Set user profile image link. - Let's implement downloadImages(). - Implement download images on frontend. - Final touches. - Let's wrap it up.</description>
<size>1752911911</size>
</item><item>
<title>AmigosCode - Database Design &amp; Implementation</title>
<category>Course</category>
<infohash>b28f9ce5a1258610bb29d3b50bd141b07c5de0f6</infohash>
<guid>https://academictorrents.com/details/b28f9ce5a1258610bb29d3b50bd141b07c5de0f6</guid>
<link>https://academictorrents.com/details/b28f9ce5a1258610bb29d3b50bd141b07c5de0f6</link>
<description>Here's a snapshot of what you will learn: Capturing Entities: Understand the importance of accurately identifying and defining the entities in your database. This is the first step towards a well-structured database. Designing ERD: Learn to create Entity Relationship Diagrams, a powerful tool to visualize your database structure and relationships between entities. ERD Cardinalities: Delve into the heart of database relationships with cardinalities, understanding how entities interact with each other. One to One Relationships: Uncover the nuances of one-to-one relationships and how to design them efficiently in your database. One to Many Relationships: Explore the dynamics of one-to-many relationships, a common scenario in database design, and learn to implement them effectively. Many to Many Relationships: Tackle the complexity of many-to-many relationships, learning how to break them down and represent them effectively in your database. Defining Constraints: Learn how to define constraints to maintain the integrity of your data, ensuring your database remains reliable and accurate. Database Normalisation: Understand the principles of database normalisation, a key process that helps eliminate redundancy and improve data integrity. SQL Implementation: Put your design into action with SQL, the standard language for interacting with databases. Learn how to create, manipulate, and query your database using SQL.</description>
<size>782263343</size>
</item><item>
<title>AmigosCode - Advanced Databases</title>
<category>Course</category>
<infohash>ff065d6dcca1d82c13b53ab924826bd4d886e093</infohash>
<guid>https://academictorrents.com/details/ff065d6dcca1d82c13b53ab924826bd4d886e093</guid>
<link>https://academictorrents.com/details/ff065d6dcca1d82c13b53ab924826bd4d886e093</link>
<description>Embark on a journey to master advanced database concepts essential for crafting production-grade applications. As a software engineer, understanding these concepts will empower you to develop robust backend applications while gaining a comprehensive understanding of the underlying processes. This course will equip you with knowledge and skills in: Joins: Learn how to effectively link data from two or more tables, a fundamental aspect of relational databases. Indexes: Understand how to optimize your databases for faster, more efficient data retrieval. Transactions: Gain insights into how databases maintain integrity even in the event of system failures. Database Administration: Acquire the skills needed to manage and maintain a database system effectively. Functions / Stored Procedures: Learn how to use these powerful tools to encapsulate and automate common database tasks. Schemas: Understand the role of schemas in organizing database objects and controlling user access. Database Backups: Learn the importance of regular backups and how to implement them to protect your data.</description>
<size>632227901</size>
</item><item>
<title>AmigosCode - Database Design &amp; Implementation</title>
<category>Course</category>
<infohash>e3dab174364e192e31a6ecb60c5712f3d69fcad1</infohash>
<guid>https://academictorrents.com/details/e3dab174364e192e31a6ecb60c5712f3d69fcad1</guid>
<link>https://academictorrents.com/details/e3dab174364e192e31a6ecb60c5712f3d69fcad1</link>
<description>Here's a snapshot of what you will learn: Capturing Entities: Understand the importance of accurately identifying and defining the entities in your database. This is the first step towards a well-structured database. Designing ERD: Learn to create Entity Relationship Diagrams, a powerful tool to visualize your database structure and relationships between entities. ERD Cardinalities: Delve into the heart of database relationships with cardinalities, understanding how entities interact with each other. One to One Relationships: Uncover the nuances of one-to-one relationships and how to design them efficiently in your database. One to Many Relationships: Explore the dynamics of one-to-many relationships, a common scenario in database design, and learn to implement them effectively. Many to Many Relationships: Tackle the complexity of many-to-many relationships, learning how to break them down and represent them effectively in your database. Defining Constraints: Learn how to define constraints to maintain the integrity of your data, ensuring your database remains reliable and accurate. Database Normalisation: Understand the principles of database normalisation, a key process that helps eliminate redundancy and improve data integrity. SQL Implementation: Put your design into action with SQL, the standard language for interacting with databases. Learn how to create, manipulate, and query your database using SQL.</description>
<size>632227901</size>
</item><item>
<title>nclimgrid-daily</title>
<category>Dataset</category>
<infohash>3b7d120c33110a0706c9afae714648b6f8e249a7</infohash>
<guid>https://academictorrents.com/details/3b7d120c33110a0706c9afae714648b6f8e249a7</guid>
<link>https://academictorrents.com/details/3b7d120c33110a0706c9afae714648b6f8e249a7</link>
<description>NCEI's nClimGrid-Daily product contains gridded fields and area averages of daily maximum, minimum, and average temperatures (Tmax, Tmin, and Tavg) and daily precipitation amount (Prcp) for the Contiguous United States (CONUS) from January 1, 1951-present. The dataset is designed for climate monitoring and other applications that rely on placing event-specific meteorological patterns into a long-term historical context. Data are derived from morning and midnight observations from the Global Historical Climatology Network-daily (GHCNd) dataset, processed with techniques that address the spatial and temporal variations affecting the quality and homogeneity of the fields. Contains archive files; does not include access files (the archive files should contain the full data).</description>
<size>117552332079</size>
</item><item>
<title>loc.gov-benjamin-banneker</title>
<category>Dataset</category>
<infohash>b758963c20d21ff112092944049d81806ff0ed40</infohash>
<guid>https://academictorrents.com/details/b758963c20d21ff112092944049d81806ff0ed40</guid>
<link>https://academictorrents.com/details/b758963c20d21ff112092944049d81806ff0ed40</link>
<description>A mirror of https://guides.loc.gov/benjamin-banneker/digital-resources and all collections linked from that page: * African American Perspectives: Materials Selected from the Rare Book Collection https://www.loc.gov/collections/african-american-perspectives-rare-books/ * George Washington Papers https://www.loc.gov/collections/george-washington-papers/ * Printed Ephemera: Three Centuries of Broadsides and Other Printed Ephemera https://www.loc.gov/collections/broadsides-and-other-printed-ephemera/ * Thomas Jefferson Papers, 1606 to 1827 https://www.loc.gov/collections/thomas-jefferson-papers/ Data captured between 2025-03-23 and 2025-03-25. All data uncompressed is about 170 GiB. Table of contents: * banneker.html: Barebones mirror of https://guides.loc.gov/benjamin-banneker/digital-resources * index/: search results pages for each collection. each page links to many pages in items/. * items/: detail pages like www.loc.gov/item/123abc. each page links to many downloaded files. * pdfs/, jpgs/, ...: all downloaded files segmented by filetype. others/ contains all remaining filetypes of which there are few, such as XML, JP2, HTML, ... Note that many of the files are duplicates in different formats. It was determined that it's best to download all versions rather than to figure out which version is highest-res and covers all pages of e.g. a scanned book, and that sorting out duplicates would provide negligible storage cost savings.</description>
<size>135169053334</size>
</item><item>
<title>Foundr | Fast-track Your Ecommerce Growth With The AI Accelerator Program</title>
<category>Course</category>
<infohash>c55cdc8407b674acbf31e69e7b41e3a7e79ad71b</infohash>
<guid>https://academictorrents.com/details/c55cdc8407b674acbf31e69e7b41e3a7e79ad71b</guid>
<link>https://academictorrents.com/details/c55cdc8407b674acbf31e69e7b41e3a7e79ad71b</link>
<description>Foundr - Fast-track Your Ecommerce Growth With The AI Accelerator Program Course details Ready to revolutionize your eCommerce business with AI? Foundr presents the AI Accelerator Program: a cutting-edge, LIVE 4-week intensive workshop designed exclusively for eCommerce entrepreneurs looking to scale faster, automate smarter, and boost profitability like never before! What You'll Learn: - How to integrate AI across marketing, sales, and operations - Choosing the right AI tools to maximize efficiency - Automating workflows for seamless business scaling - Crafting AI-powered marketing campaigns (ads, email, and product visuals) - Using AI-driven frameworks for smarter decision-making - Enhancing product features and expanding into new markets - Speeding up product development and increasing production efficiency General Details: Duration: 7h 17m 26s Updated: 03/2025 Language: English Source: https://foundr.com/pages/ecomm-ai-summit-aap-sp MP4 | Video: AVC, 1920x1080p | Audio: AAC, 44.100 KHz, 2 Ch</description>
<size>3512178443</size>
</item><item>
<title>SkillShare | Master AI Content Creation with Minimax Hailoua</title>
<category>Course</category>
<infohash>7ecb1307d9275f14e60fc62342ecc495b87c9bae</infohash>
<guid>https://academictorrents.com/details/7ecb1307d9275f14e60fc62342ecc495b87c9bae</guid>
<link>https://academictorrents.com/details/7ecb1307d9275f14e60fc62342ecc495b87c9bae</link>
<description>SkillShare - Master AI Content Creation with Minimax Hailouai About This Class Minimax Hailouai is a powerful tool for AI content creation, designed to help you create stunning videos and visuals effortlessly. Key Features: - Generate AI videos with ease - Create high-quality content in minutes - Smart prompt suggestions for beginners Benefits: - Save time and effort - No prior experience needed - Turn ideas into impressive visuals instantly Skills you’ll gain - AI for Creativity &amp; Inspiration - AI &amp; Innovation - AI for Animation &amp; 3D General Details: Duration: 1h 19m 05s Updated: 03/2025 Language: English Source: https://www.skillshare.com/en/classes/master-ai-content-creation-with-minimax-hailouai/190940526 MP4 | Video: AVC, 1920x1080p | Audio: AAC, 44.100 KHz, 2 Ch</description>
<size>773049604</size>
</item><item>
<title>publibrary.sec.usace.army.mil-2025-03-01</title>
<category>Dataset</category>
<infohash>948f0de7aee9ba4e8fbfa3af8125b49de9050d83</infohash>
<guid>https://academictorrents.com/details/948f0de7aee9ba4e8fbfa3af8125b49de9050d83</guid>
<link>https://academictorrents.com/details/948f0de7aee9ba4e8fbfa3af8125b49de9050d83</link>
<description>This is a backup from March 1, 2025 of all public documents available from https://publibrary.sec.usace.army.mil/</description>
<size>20240425811</size>
</item><item>
<title>Hindawi XML Corpus - 2 April, 2024</title>
<category>Dataset</category>
<infohash>d402d0f51e2174d515b8a38d5af81478102a9f12</infohash>
<guid>https://academictorrents.com/details/d402d0f51e2174d515b8a38d5af81478102a9f12</guid>
<link>https://academictorrents.com/details/d402d0f51e2174d515b8a38d5af81478102a9f12</link>
<description>[ The original description of the XML corpus is provided below.] In order to facilitate the use of Hindawi’s content for data mining purposes, Hindawi makes its full corpus of XML content available for download as a single .zip file. This .zip file is organized using a two-level folder structure, first by publication year, then by journal. For example, the folder called "2011" contains subfolders for any journal that has one or more published articles in 2011, and inside each of these folders are individual XML files for these articles. In addition, the downloaded .zip file contains an XML file called contents.xml, which provides an overview of all of the subfolders that exist within the main .zip file. The content of this .zip file is updated on a daily basis, and the XML files contained within this corpus download adhere to the JATS 1.1 DTD. If you have questions about Hindawi’s XML corpus download, please contact help@hindawi.com. [ This corpus is no longer updated or hosted publicly by Wiley. This copy was downloaded on April 2, 2024. ]</description>
<size>6580238477</size>
</item><item>
<title>integrated-global-radiosonde-archive</title>
<category>Dataset</category>
<infohash>0b77070e48703a4bc86327a8c5523e3a41b33b70</infohash>
<guid>https://academictorrents.com/details/0b77070e48703a4bc86327a8c5523e3a41b33b70</guid>
<link>https://academictorrents.com/details/0b77070e48703a4bc86327a8c5523e3a41b33b70</link>
<description>The Integrated Global Radiosonde Archive (IGRA) consists of radiosonde and pilot balloon observations from more than 2,800 globally distributed stations. The earliest data date back to 1905, and recent data become available in near real time from about 800 stations worldwide. Observations are available at standard and variable pressure levels, fixed and variable-height wind levels, and the surface and tropopause. Variables include pressure, temperature, geopotential height, relative humidity, dew point depression, wind direction and speed, and elapsed time since launch. IGRA consists of three components: (1) individual soundings, organized into one file per station; (2) monthly means, organized into one file per variable and time of day (0000 and 1200 Coordinated Universal Time); and (3) sounding-derived parameters, organized into one file per station. NCEI also provides access to IGRA station metadata that can be helpful for interpreting the data. This includes current station names and locations as well as information on changes in station location, instrumentation, and observing practices over time, to the extent that it is available. The IGRA period of record varies for each station and variable. Approximately 800 of the more than 2,800 IGRA stations are currently reporting data. Vertical extent as well as temporal and vertical resolution also vary among stations and over time. Recommended Uses and Limitations: IGRA can be used as input to air pollution models, for studies of the detailed vertical structure of the troposphere and lower stratosphere, for assessing the atmospheric conditions during a particular meteorological event, and for many other analyses and operational applications. NCEI scientists have applied a comprehensive set of quality control procedures to the data to remove gross errors. However, the data may still include jumps and other discontinuities caused by changes in instrumentation, observing practice, or station location.
Users studying long-term trends may prefer to use the NOAA Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC) or one of the available non-NOAA IGRA-derived, homogeneity-adjusted radiosonde datasets.</description>
<size>38673375312</size>
</item><item>
<title>global-historical-climatology-network-daily</title>
<category>Dataset</category>
<infohash>db8f36fcce4134d2f7891c6d4347c1989f215bae</infohash>
<guid>https://academictorrents.com/details/db8f36fcce4134d2f7891c6d4347c1989f215bae</guid>
<link>https://academictorrents.com/details/db8f36fcce4134d2f7891c6d4347c1989f215bae</link>
<description>The Global Historical Climatology Network daily (GHCNd) is an integrated database of daily climate summaries from land surface stations across the globe. GHCNd is made up of daily climate records from numerous sources that have been integrated and subjected to a common suite of quality assurance reviews. GHCNd contains records from more than 100,000 stations in 180 countries and territories. NCEI provides numerous daily variables, including maximum and minimum temperature, total daily precipitation, snowfall, and snow depth. About half the stations report only precipitation. Both record length and period of record vary by station and cover intervals ranging from less than a year to more than 175 years. Note: the comment embedded in the torrent file includes unrelated information; ignore it and use this description instead.</description>
<size>166392233421</size>
</item><item>
<title>nih-pub-pmc-oa-manuscript-historical</title>
<category>Dataset</category>
<infohash>1eff24113fe7c99b696c3e6d5bb3de0f174ac378</infohash>
<guid>https://academictorrents.com/details/1eff24113fe7c99b696c3e6d5bb3de0f174ac378</guid>
<link>https://academictorrents.com/details/1eff24113fe7c99b696c3e6d5bb3de0f174ac378</link>
<description>The PMC Open Access Subset (or PMC OA Subset) contains millions of full-text open access article files made available under Creative Commons or similar license terms, or with publisher permission. This dataset includes retractions, corrections, and expressions of concern*. Also included are select articles from the PMC COVID-19 Collection that continue to be made available under terms that allow for secondary analysis and reuse. The Author Manuscript Dataset consists of full-text files of hundreds of thousands of accepted author manuscripts (AAMs) that have been made available in PMC under a partner funder's policy. This dataset includes retractions, corrections, and expressions of concern*. Also included are full-text files of OCR'd text from articles published in the 18th, 19th, and 20th centuries, added to PMC as part of an NLM Digitization Project.</description>
<size>260986537917</size>
</item><item>
<title>WA_Apps1and2_book.pdf</title>
<category>Paper</category>
<infohash>95409914dddb0542306ca16843429bb94dd3ef96</infohash>
<guid>https://academictorrents.com/details/95409914dddb0542306ca16843429bb94dd3ef96</guid>
<link>https://academictorrents.com/details/95409914dddb0542306ca16843429bb94dd3ef96</link>
<description>Textbook for Western Australia Year 11 Mathematics Applications</description>
<size>17548570</size>
</item><item>
<title>ChemY11.pdf</title>
<category>Paper</category>
<infohash>5f838a65a7bcb0ada6c3e31c405c38c116b6d61a</infohash>
<guid>https://academictorrents.com/details/5f838a65a7bcb0ada6c3e31c405c38c116b6d61a</guid>
<link>https://academictorrents.com/details/5f838a65a7bcb0ada6c3e31c405c38c116b6d61a</link>
<description>Textbook for Western Australia Year 11 Chemistry</description>
<size>278972398</size>
</item><item>
<title>PhysY11.pdf</title>
<category>Paper</category>
<infohash>cf87f89c9a6ebf538db09404fa7eb6cd1c86de47</infohash>
<guid>https://academictorrents.com/details/cf87f89c9a6ebf538db09404fa7eb6cd1c86de47</guid>
<link>https://academictorrents.com/details/cf87f89c9a6ebf538db09404fa7eb6cd1c86de47</link>
<description>2nd Edition Textbook for Western Australia Year 11 Physics</description>
<size>230272810</size>
</item><item>
<title>usip-youtube</title>
<category>Dataset</category>
<infohash>5670c1ce39d682eea03481e91e949509e871d549</infohash>
<guid>https://academictorrents.com/details/5670c1ce39d682eea03481e91e949509e871d549</guid>
<link>https://academictorrents.com/details/5670c1ce39d682eea03481e91e949509e871d549</link>
<description>A mirror of https://www.youtube.com/c/UnitedStatesInstituteofPeace Note this also (at least partially) contains USIP podcasting material. Video files are tarred up because some of the titles contain non-ASCII characters; tarring increases compatibility across torrent clients and filesystems. Data captured from 2025-03-19 to 2025-03-21</description>
<size>292992522850</size>
</item><item>
<title>National Library of Medicine - Publications and Productions (Full Collection)</title>
<category>Dataset</category>
<infohash>525e372c1c8b2a1c40fb9259b069df55e5ec0da5</infohash>
<guid>https://academictorrents.com/details/525e372c1c8b2a1c40fb9259b069df55e5ec0da5</guid>
<link>https://academictorrents.com/details/525e372c1c8b2a1c40fb9259b069df55e5ec0da5</link>
<description>This is a full download of the entire Publications and Productions digital collection from the National Library of Medicine as it exists today. This archive contains documents and other media (videos, etc.) produced by the NLM. Each item is paired with a metadata file and may be included in multiple formats. Directories are named with the title of the item they contain (with special characters removed and occasionally truncated to keep the archive portable across different operating systems). Full, unedited title, author, and license information can be found within the metadata files. This archive is part of the Safeguarding Research project: https://safeguarding-research.discourse.group Note: You will need to use 7zip to open this archive. Description of the collection from the NLM: The NLM Publications and Productions Collection spans the 1860s to the present and includes documents published by the National Library of Medicine and its predecessors: the Library of the Surgeon General’s Office (1836-1922), the Army Medical Library (1922-1952), and the Armed Forces Medical Library (1952-1956). NLM-created documents and videos in this collection include interviews, biographies, reports, and lectures. 
Highlights include: • The medical and surgical history of the war of the rebellion (1861-65) in two volumes in six parts, including illustrations and maps; • first edition of the Army Medical Library Classification (1951), for arrangement of monographs on medicine and related sciences, designed for use in conjunction with the Library of Congress Classification; • the film, Dedication ceremonies, National Library of Medicine: December 14, 1961, a record of the dedication ceremonies for the new building of the National Library of Medicine, held on December 14, 1961; • the study, design, and implementation of MEDLARS (Medical Literature Analysis and Retrieval System) in 1964 chronicled in The MEDLARS story at the National Library of Medicine; • a catalog named for and based on the NLM exhibition, Dream Anatomy, originally showcased in 2002-2003 and featuring exceptional illustrations and portraits.</description>
<size>116367176390</size>
</item><item>
<title>nsrdb_1998.h5</title>
<category>Dataset</category>
<infohash>9d215e6df89ab122884f6936bd7750c0e6133deb</infohash>
<guid>https://academictorrents.com/details/9d215e6df89ab122884f6936bd7750c0e6133deb</guid>
<link>https://academictorrents.com/details/9d215e6df89ab122884f6936bd7750c0e6133deb</link>
<description>Partial backup of https://data.openei.org/s3_viewer?bucket=nrel-pds-nsrdb&amp;prefix=v3%2F</description>
<size>1709447055856</size>
</item><item>
<title>US Dept. of Education Institute of Museum and Library Services YouTube videos</title>
<category>Dataset</category>
<infohash>d513e232afb3b35fbbbe8f0f6f8a5cdbea6ad744</infohash>
<guid>https://academictorrents.com/details/d513e232afb3b35fbbbe8f0f6f8a5cdbea6ad744</guid>
<link>https://academictorrents.com/details/d513e232afb3b35fbbbe8f0f6f8a5cdbea6ad744</link>
<description>This is every public video from the US Dept. of Education's Institute of Museum and Library Services YouTube account, as of 2025-03-19. Filenames reflect video titles, with special characters replaced by "-". Content is likely to be in the public domain, but this is not confirmed.</description>
<size>39875078445</size>
</item><item>
<title>eric.ed.gov 02-10 dump</title>
<category>Dataset</category>
<infohash>26b59e44b35afdb71cac74552b566530f318238a</infohash>
<guid>https://academictorrents.com/details/26b59e44b35afdb71cac74552b566530f318238a</guid>
<link>https://academictorrents.com/details/26b59e44b35afdb71cac74552b566530f318238a</link>
<description>This dataset contains a dump of https://eric.ed.gov/. The database API dump is from 02/10/25. The dump of full-text downloads finished on 03/09/25 (it had finished a week or two earlier, but contained some duplicates due to code errors). Total of 506,576 articles. It comprises ERIC-JSON-API-Dump-of-DB.json, an API dump of the site's database with entry IDs (ED/EJxxxxx), descriptions, authors, paper links, etc. You can search this file with something like Notepad++ or your browser; the file is quite large, so it will open slowly. The rest of the files are a dump of all PDF journal and non-journal full-text articles (articles listed in the database as including full text, i.e. with an e_fulltextauth field value of 1). More information about the API, including field descriptions, is available at https://eric.ed.gov/?api#/default/get_eric_</description>
<size>643208028131</size>
</item><item>
<title>Monitoring Trends in Burn Severity</title>
<category>Dataset</category>
<infohash>5bb1457c6c2f591f0e9e9b84e239f55cbad68b78</infohash>
<guid>https://academictorrents.com/details/5bb1457c6c2f591f0e9e9b84e239f55cbad68b78</guid>
<link>https://academictorrents.com/details/5bb1457c6c2f591f0e9e9b84e239f55cbad68b78</link>
<description>Monitoring Trends in Burn Severity (MTBS) – A program implemented in 2005 and conducted jointly by the Forest Service and Department of the Interior to map the location, extent and associated burn severity of all large fires in the United States. The program generates a suite of geospatial data for targeted fires occurring across all ownerships from 1984 to present and is intended to meet numerous policy, operational and research needs. MTBS is an interagency program whose goal is to consistently map the burn severity and extent of large fires across all lands of the United States from 1984 to present. This includes all fires 1,000 acres or greater in the western United States and 500 acres or greater in the eastern United States. The extent of coverage includes the continental U.S., Alaska, Hawaii and Puerto Rico. README: I downloaded these three directories (composite_data, mtbs_fod_pts_data, mtbs_perimeter_data) from https://www.mtbs.gov/direct-download on March 3, 2025 as part of the data rescue effort for the Stanford Doerr School of Sustainability. I clicked the download link for the two National Datasets: the Fire Occurrence Dataset and the Burned Areas Boundaries Dataset. I didn't change the Date Range slider, so the time coverage is 1984-2024. These downloads are in the mtbs_fod_pts_data and mtbs_perimeter_data directories, respectively. I downloaded data under the Burn Severity Mosaics tab by trying to select all for the years 2022-2024. Data through 2021 is available at https://developers.google.com/earth-engine/datasets/catalog/USFS_GTAC_MTBS_annual_burn_severity_mosaics_v1 (Google Earth Engine), so I tried to download only 2022-2024, but still ended up with data from 2011-2024. These are in the composite_data directory.</description>
<size>850272766</size>
</item><item>
<title>USPTO - PatentsView Database Tables</title>
<category>Dataset</category>
<infohash>2c6eb904b11a8e188c59e5e5ffdd06562950d84b</infohash>
<guid>https://academictorrents.com/details/2c6eb904b11a8e188c59e5e5ffdd06562950d84b</guid>
<link>https://academictorrents.com/details/2c6eb904b11a8e188c59e5e5ffdd06562950d84b</link>
<description>PatentsView (description below) will go offline on March 28th. This torrent includes all bulk downloadable tables from: https://patentsview.org/download/data-download-tables, along with the data dictionaries and the published logic diagram. The zip files contain tab-delimited files that are considerably larger once uncompressed. The data includes patent activity from 1976 to 2024. Description: PatentsView is an award-winning visualization, data dissemination, and analysis platform that focuses on intellectual property (IP) data. Support for the site and the team that works on it comes from the Office of the Chief Economist at the U.S. Patent &amp; Trademark Office (USPTO). PatentsView serves students, educators, researchers, policymakers, small business owners, and the public. It offers a unique and valuable open data platform providing free data dissemination and value-added analyses to foster better knowledge of the IP system and drive new insights into invention and innovation.</description>
<size>249222268317</size>
</item><item>
<title>US Institute of Peace website: Publications</title>
<category>Dataset</category>
<infohash>8fe6ae3733bdeee629b1b5571c6a1379f3a99647</infohash>
<guid>https://academictorrents.com/details/8fe6ae3733bdeee629b1b5571c6a1379f3a99647</guid>
<link>https://academictorrents.com/details/8fe6ae3733bdeee629b1b5571c6a1379f3a99647</link>
<description>This is a limited scrape of  www.usip.org  just before it went offline on 2025-03-19. This archive contains only the publications ( www.usip.org/publications/ ), specifically: 1. HTML of search pages listing all publications 2. HTML of publication text for everything linked from search results 3. Downloads of approximately 94% of the files referenced in those articles (images mostly, but also PDFs) There is no CSS or JS to format articles, and many image and file links will be broken as they include the base URL (though the files may still be available in the archive!).  The archive does not include any of the other sections of the site, which included grant information, information for educators, and various about pages.  A YouTube page, podcast, and social media accounts associated with USIP were also not included here. Since USIP is (was?) an independent organization and not an official US Government function (regardless of the opinions of said government), the copyright on these files is almost certainly held by the organization and/or the individual article authors.</description>
<size>2404299620</size>
</item><item>
<title>catalog.archives.gov-indian-affairs</title>
<category>Dataset</category>
<infohash>e94e7f0635b47c3eeb41ef013a2c0dd1e6e44011</infohash>
<guid>https://academictorrents.com/details/e94e7f0635b47c3eeb41ef013a2c0dd1e6e44011</guid>
<link>https://academictorrents.com/details/e94e7f0635b47c3eeb41ef013a2c0dd1e6e44011</link>
<description>A download of catalog.archives.gov filtered to the search term "BUREAU OF INDIAN AFFAIRS". Captured around 2025-02-08. search-results.tar.zst contains all JSON metadata that would be available on search results pages. This includes descriptions, authorship, and year of each record found, plus a list of download URLs for PDFs etc. It's best to download those files first to determine whether this dataset contains something specific you need. download-urls.txt.zst contains the list of AWS S3 URLs that were downloaded into the following folders: tifs.tar.zst contains all TIFF files, 3.2 TiB uncompressed. jpgs.tar.zst contains all JPEG files, 12 TiB uncompressed. others.tar.zst contains all other files, 1 TiB uncompressed.</description>
<size>15284781260916</size>
</item><item>
<title>Expanded atlas of the sky in continuous gravitational waves</title>
<category>Dataset</category>
<infohash>c3db0a59e05e8b4c25b88ab833a24693a99f29aa</infohash>
<guid>https://academictorrents.com/details/c3db0a59e05e8b4c25b88ab833a24693a99f29aa</guid>
<link>https://academictorrents.com/details/c3db0a59e05e8b4c25b88ab833a24693a99f29aa</link>
<description>We present the full release of the atlas of continuous gravitational waves, covering frequencies from 20 Hz to 1700 Hz and spindowns from -5e-10 to 5e-10 Hz/s. Compared to the early atlas release, we have extended the frequency range and have performed follow-up on the outliers. Conducting continuous wave searches is computationally intensive and time-consuming. The atlas facilitates the execution of new searches with relatively minimal computing resources.</description>
<size>1511388271936</size>
</item><item>
<title>digitalarchive.wilsoncenter.org</title>
<category>Dataset</category>
<infohash>acc5c7786d99532662e3f3ce184e5cede43f582c</infohash>
<guid>https://academictorrents.com/details/acc5c7786d99532662e3f3ce184e5cede43f582c</guid>
<link>https://academictorrents.com/details/acc5c7786d99532662e3f3ce184e5cede43f582c</link>
<description>A complete mirror of https://digitalarchive.wilsoncenter.org/search, including all PDF downloads Data captured around 2025-03-16</description>
<size>11382570832</size>
</item><item>
<title>imls.gov-publications</title>
<category>Dataset</category>
<infohash>927d308963b9534562a2a4c61fb039a6d15188ec</infohash>
<guid>https://academictorrents.com/details/927d308963b9534562a2a4c61fb039a6d15188ec</guid>
<link>https://academictorrents.com/details/927d308963b9534562a2a4c61fb039a6d15188ec</link>
<description>A mirror of https://www.imls.gov/research-evaluation/additional-resources/publications Captured on 2025-03-17 www.imls.gov/sites &amp;mdash; the actual PDF and JPEG files www.imls.gov/publications &amp;mdash; HTML metadata page for each publication www.imls.gov/research-evaluation &amp;mdash; search results downloads</description>
<size>648156849</size>
</item><item>
<title>fmcs.gov-resources</title>
<category>Dataset</category>
<infohash>28c5b5ac7fa2c6800d3e948bd3b4ee3781764cca</infohash>
<guid>https://academictorrents.com/details/28c5b5ac7fa2c6800d3e948bd3b4ee3781764cca</guid>
<link>https://academictorrents.com/details/28c5b5ac7fa2c6800d3e948bd3b4ee3781764cca</link>
<description>Mirror of https://www.fmcs.gov/resources/, all subpages (plain HTML, no images or CSS), zipped up to documents.tar.zst Includes a copy of the fmcsinfo YouTube channel in videos/ Captured on 2025-03-16</description>
<size>264795402</size>
</item><item>
<title>Reddit comments/submissions 2025-02</title>
<category>Dataset</category>
<infohash>bfddd38b2bbc6f09ce6d52eac4fcf155371d4635</infohash>
<guid>https://academictorrents.com/details/bfddd38b2bbc6f09ce6d52eac4fcf155371d4635</guid>
<link>https://academictorrents.com/details/bfddd38b2bbc6f09ce6d52eac4fcf155371d4635</link>
<description>Reddit comments and submissions from 2025-02 collected by u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps Previous months can be found here https://academictorrents.com/details/ba051999301b109eab37d16f027b3f49ade2de13</description>
<size>52375230192</size>
</item><item>
<title>SNOTEL_reports</title>
<category>Dataset</category>
<infohash>70ac418502d9068e3f0cd356b6c3cc2a805fce1d</infohash>
<guid>https://academictorrents.com/details/70ac418502d9068e3f0cd356b6c3cc2a805fce1d</guid>
<link>https://academictorrents.com/details/70ac418502d9068e3f0cd356b6c3cc2a805fce1d</link>
<description>USDA SNOTEL reports, updated 12 Mar 2025. From the repository README: This tiny script grabs USDA SNOTEL reports for safekeeping and re-analysis. SNOTEL monitors snow and water across the (western) US. https://www.nrcs.usda.gov/resources/data-and-reports/snow-and-water-interactive-map The data downloaded are the historical reports summarizing daily values of snow pack and additional metrics. Reports lag real-time values by a couple of weeks. The script generates one big table with 12M rows and 19 columns (variables). Data can be post-processed for both the timing and amount of snow accumulation using the snotelr R package. The data included in this repository as a compressed snotel_reports.rds file was last updated 12 Mar. 2025. Use the R readRDS() function to access the data: snotel_data &lt;- readRDS("snotel_reports.rds") Note: Please cite the data from the USDA National Water and Climate Center as: United States, US Department of Agriculture, Natural Resource Conservation Service, National Water and Climate Center. (2024, February 29). Air and Water Database. Water and Climate Information System. https://nwcc-apps.sc.egov.usda.gov/ Schaefer, G. L., &amp; Paetzold, R. F. (2001). SNOTEL (SNOwpack TELemetry) and SCAN (soil climate analysis network). Automated weather stations for applications in agriculture and water resources management: current use and future perspectives, 1074, 187-194.</description>
<size>45670428</size>
</item><item>
<title>March 2025 Public Data File from Crossref</title>
<category>Dataset</category>
<infohash>e0eda0104902d61c025e27e4846b66491d4c9f98</infohash>
<guid>https://academictorrents.com/details/e0eda0104902d61c025e27e4846b66491d4c9f98</guid>
<link>https://academictorrents.com/details/e0eda0104902d61c025e27e4846b66491d4c9f98</link>
<description>Note that this Crossref metadata is always openly available. The difference here is that we’ve done the time-saving work of putting all of the records registered through March 2025 into one file for download. To keep this metadata current, you can access new records via our public API at: https://api.crossref.org And, if you do use our API, we encourage you to read the section of the documentation on "etiquette". That is, how to use the API without making it impossible for others to use.</description>
<size>196939181654</size>
</item><item>
<title>[Sample Dataset] March 2025 Public Data File from Crossref</title>
<category>Dataset</category>
<infohash>3e5545bb8d34b57a50181dfed0d80e88da066045</infohash>
<guid>https://academictorrents.com/details/3e5545bb8d34b57a50181dfed0d80e88da066045</guid>
<link>https://academictorrents.com/details/3e5545bb8d34b57a50181dfed0d80e88da066045</link>
<description>[Sample Dataset] March 2025 Public Data File from Crossref. This dataset includes 100 random JSON records from the Crossref metadata corpus.</description>
<size>24326067</size>
</item><item>
<title>PubChem3D 10 conformers XML</title>
<category>Dataset</category>
<infohash>3f14fd37df52729b3e7975b0339dce64e46e44c6</infohash>
<guid>https://academictorrents.com/details/3f14fd37df52729b3e7975b0339dce64e46e44c6</guid>
<link>https://academictorrents.com/details/3f14fd37df52729b3e7975b0339dce64e46e44c6</link>
<description>This is an archive from PubChem, https://pubchem.ncbi.nlm.nih.gov/ This archive contains only the PubChem3D project, with 10 diverse conformers per compound, in XML format.  This corresponds to the backend FTP folder  ftp.ncbi.nlm.nih.gov/pubchem/Compound_3D/10_conf_per_cmpd/XML .  There are companion torrents for the [ASN.1](https://academictorrents.com/details/da0f75a319889459290b606c417529d540bbce25) and [SDF](https://academictorrents.com/details/e8858a82b1576251f822443eb02b7c3dc225e123) formatted data. From the README, present in the [base torrent for PubChem3D](https://academictorrents.com/details/fc044237ff0c901ce32db5d51a71a71df8b9f07f):</description>
<size>1858937926848</size>
</item><item>
<title>PubChem3D 10 conformers SDF</title>
<category>Dataset</category>
<infohash>e8858a82b1576251f822443eb02b7c3dc225e123</infohash>
<guid>https://academictorrents.com/details/e8858a82b1576251f822443eb02b7c3dc225e123</guid>
<link>https://academictorrents.com/details/e8858a82b1576251f822443eb02b7c3dc225e123</link>
<description>This is an archive from PubChem, https://pubchem.ncbi.nlm.nih.gov/ This archive contains only the PubChem3D project, with 10 diverse conformers per compound, in SDF format.  This corresponds to the backend FTP folder  ftp.ncbi.nlm.nih.gov/pubchem/Compound_3D/10_conf_per_cmpd/SDF .  There are companion torrents for the [ASN.1](https://academictorrents.com/details/da0f75a319889459290b606c417529d540bbce25) and [XML](https://academictorrents.com/details/3f14fd37df52729b3e7975b0339dce64e46e44c6) formatted data. From the README, present in the [base torrent for PubChem3D](https://academictorrents.com/details/fc044237ff0c901ce32db5d51a71a71df8b9f07f):</description>
<size>1286868903878</size>
</item><item>
<title>PubChem3D 10 conformers ASN.1</title>
<category>Dataset</category>
<infohash>da0f75a319889459290b606c417529d540bbce25</infohash>
<guid>https://academictorrents.com/details/da0f75a319889459290b606c417529d540bbce25</guid>
<link>https://academictorrents.com/details/da0f75a319889459290b606c417529d540bbce25</link>
<description>This is an archive from PubChem, https://pubchem.ncbi.nlm.nih.gov/ This archive contains only the PubChem3D project, with 10 diverse conformers per compound, in ASN.1 format.  This corresponds to the backend FTP folder  ftp.ncbi.nlm.nih.gov/pubchem/Compound_3D/10_conf_per_cmpd/ASN .  There are companion torrents for the [SDF](https://academictorrents.com/details/e8858a82b1576251f822443eb02b7c3dc225e123) and [XML](https://academictorrents.com/details/3f14fd37df52729b3e7975b0339dce64e46e44c6) formatted data. From the README, present in the [base torrent for PubChem3D](https://academictorrents.com/details/fc044237ff0c901ce32db5d51a71a71df8b9f07f):</description>
<size>1217438170850</size>
</item><item>
<title>PubChem3D Base</title>
<category>Dataset</category>
<infohash>fc044237ff0c901ce32db5d51a71a71df8b9f07f</infohash>
<guid>https://academictorrents.com/details/fc044237ff0c901ce32db5d51a71a71df8b9f07f</guid>
<link>https://academictorrents.com/details/fc044237ff0c901ce32db5d51a71a71df8b9f07f</link>
<description>This is an archive from PubChem, https://pubchem.ncbi.nlm.nih.gov/ This archive contains only the PubChem3D project, without the archive of "10 diverse conformers per compound" as this is much larger than the rest of the archive combined.  It corresponds to the backend FTP folder  ftp.ncbi.nlm.nih.gov/pubchem/Compound_3D , without the subfolder  10_conf_per_cmpd .  There are companion torrents for the contents of that folder, split into downloads in [ASN.1](https://academictorrents.com/details/da0f75a319889459290b606c417529d540bbce25), [SDF](https://academictorrents.com/details/e8858a82b1576251f822443eb02b7c3dc225e123), and [XML](https://academictorrents.com/details/3f14fd37df52729b3e7975b0339dce64e46e44c6) format. From the README file in the folder root:</description>
<size>1624607359267</size>
</item><item>
<title>usace-corpsconnection-youtube</title>
<category>Dataset</category>
<infohash>7c560060f72118bce304a250270ebc9798443329</infohash>
<guid>https://academictorrents.com/details/7c560060f72118bce304a250270ebc9798443329</guid>
<link>https://academictorrents.com/details/7c560060f72118bce304a250270ebc9798443329</link>
<description>CORPSCONNECTION YouTube channel. A copy of all videos on https://www.youtube.com/CORPSCONNECTION captured on 2025-03-09. command: yt-dlp --write-info-json https://www.youtube.com/CORPSCONNECTION</description>
<size>81236882297</size>
</item><item>
<title>hprcc.unl.edu</title>
<category>Dataset</category>
<infohash>5ff6da03bd4042c7c9b93d6bc5ed86b5c21dbe21</infohash>
<guid>https://academictorrents.com/details/5ff6da03bd4042c7c9b93d6bc5ed86b5c21dbe21</guid>
<link>https://academictorrents.com/details/5ff6da03bd4042c7c9b93d6bc5ed86b5c21dbe21</link>
<description>Full mirror of hprcc.unl.edu, captured on 2025-02-06.</description>
<size>1090855898</size>
</item><item>
<title>www.nsa.gov</title>
<category>Dataset</category>
<infohash>cdc6a1f52015fa043f60ed9a7dc01e9b9f702e86</infohash>
<guid>https://academictorrents.com/details/cdc6a1f52015fa043f60ed9a7dc01e9b9f702e86</guid>
<link>https://academictorrents.com/details/cdc6a1f52015fa043f60ed9a7dc01e9b9f702e86</link>
<description>Full mirror of www.nsa.gov, captured on 2025-02-10.</description>
<size>13268239115</size>
</item><item>
<title>US National Archives Catalog, 2025-02-21</title>
<category>Dataset</category>
<infohash>96b136356b9729b49500fff8df4f1cd6616a4337</infohash>
<guid>https://academictorrents.com/details/96b136356b9729b49500fff8df4f1cd6616a4337</guid>
<link>https://academictorrents.com/details/96b136356b9729b49500fff8df4f1cd6616a4337</link>
<description>NOTE: contents decompress to ~405GB! This is the public catalog of the United States National Archives, searchable at https://catalog.archives.gov/. As a work of the United States Government, it is in the public domain (though some of the items in the archives themselves are covered by copyright).  This is a capture of the catalog as of 2025-02-21: it consists of collections and record groups in the JSON format, with the former indexing into the latter.  Entries usually contain brief descriptions of the associated item, and any entries with online components (e.g., scans or documents) additionally list the URLs of any associated digital objects.  Entries for photos of documents usually contain (inaccurate) OCR transcriptions of the visible text. This is quite similar to catalog archives offered by the National Archives and Records Administration (NARA) itself through Amazon S3: * Amazon site: https://registry.opendata.aws/nara-national-archives-catalog/ * NARA site: https://www.archives.gov/developer/national-archives-catalog-dataset However, the "working" catalog present in the same S3 bucket seems to contain newer information than the latest archive available for download.  This is a capture of the "working" catalog, with updates as of 2025-02-21, rather than 2024-09-12.</description>
<size>49658930856</size>
</item><item>
<title>Reddit comments/submissions 2025-02</title>
<category>Dataset</category>
<infohash>2f873e0b15da5ee29b63e586c0ab1dedd3508870</infohash>
<guid>https://academictorrents.com/details/2f873e0b15da5ee29b63e586c0ab1dedd3508870</guid>
<link>https://academictorrents.com/details/2f873e0b15da5ee29b63e586c0ab1dedd3508870</link>
<description>Reddit comments and submissions from 2025-02. Documentation, JSON schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>51490445723</size>
</item><item>
<title>irma.nps.gov-datastore</title>
<category>Dataset</category>
<infohash>198bd721c54aa5bf426fd8fbfd78918de2fa81a2</infohash>
<guid>https://academictorrents.com/details/198bd721c54aa5bf426fd8fbfd78918de2fa81a2</guid>
<link>https://academictorrents.com/details/198bd721c54aa5bf426fd8fbfd78918de2fa81a2</link>
<description># IRMA NPS DataStore A mirror of https://irma.nps.gov/DataStore/Search/Quick (a search for the empty string was submitted, and all results and file downloads were crawled). Data captured on 2025-03-04. This archive contains 3317 pages of search results, amounting to 165806 records ("references" in DataStore lingo). pages.tar.zst contains all pages from search results, in JSON format. profiles.tar.zst contains JSON metadata (description, date published, author) for each reference. holdings.tar.zst contains JSON file listings per reference, i.e. file MIME types and sizes. html.tar.zst, pdfs.tar.zst, extracted-zip.tar.zst and others.tar.zst are the actual downloaded files, segmented by filetype for compressibility. extracted-zip.tar.zst does not contain the original zipfiles but rather extracted folders, so that they can be more effectively recompressed by ZStandard. The original total size of all zipfiles was 2.2 TiB; all data fully extracted was 3.1 TiB. Detailed code for how the data was scraped is available in steps.txt. Data is packed in ZStandard-compressed tarballs with -9 --long to reduce torrent metadata and disk usage.</description>
<size>2907396996425</size>
</item><item>
<title>enwiki-20250301-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>517bd4636dbb4b148374145e26c20f61ac63c093</infohash>
<guid>https://academictorrents.com/details/517bd4636dbb4b148374145e26c20f61ac63c093</guid>
<link>https://academictorrents.com/details/517bd4636dbb4b148374145e26c20f61ac63c093</link>
<description>English Wikipedia Multistream 2025-03-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>24749684048</size>
</item><item>
<title>usace.contentdm.oclc.org</title>
<category>Dataset</category>
<infohash>3c538db14147395dcd3866a405d37bb58e5e7a05</infohash>
<guid>https://academictorrents.com/details/3c538db14147395dcd3866a405d37bb58e5e7a05</guid>
<link>https://academictorrents.com/details/3c538db14147395dcd3866a405d37bb58e5e7a05</link>
<description>**U.S. Army Corps of Engineers Digital Library** An almost complete mirror of https://usace.contentdm.oclc.org/ Data captured from 2025-02-28 to 2025-03-02. Metadata was downloaded in JSON format and is available in pages.tar.zst and items.tar.zst. Downloads are available segmented by filetype in the other .tar.zst archives: pdfs.tar.zst contains only PDF files, jp2s.tar.zst contains only JPEG 2000 files, and so on. download-urls.txt.zst and item-links.txt.zst are intermediate artifacts from scraping. steps.txt contains the shell scripts used to produce this dataset.</description>
<size>621813944024</size>
</item><item>
<title>Reactive GraphQL Masterclass For Java Spring Boot Developers.zip</title>
<category>Course</category>
<infohash>040c932c09ebe34783a4e224df0c616a251d6e9d</infohash>
<guid>https://academictorrents.com/details/040c932c09ebe34783a4e224df0c616a251d6e9d</guid>
<link>https://academictorrents.com/details/040c932c09ebe34783a4e224df0c616a251d6e9d</link>
<description># 🚀 Master GraphQL and Spring WebFlux for Reactive Microservices ## Course Overview: This comprehensive course equips you with the skills to build modern, reactive microservices using GraphQL and Spring WebFlux. You'll gain deep knowledge of GraphQL's query language, schema design, and seamless integration with Spring WebFlux, along with advanced concepts like real-time subscriptions, input validation, and testing strategies. ## 🎯 What You’ll Learn: - GraphQL fundamentals – principles, query language, schema design - How GraphQL differs from REST - Building reactive microservices with Spring WebFlux - Designing robust GraphQL APIs - Handling nested objects, custom types, and input validation - Solving the N+1 problem and optimizing data fetching - Implementing real-time updates with GraphQL Subscriptions - Error handling and exception management - Caching, directives, fragments, and advanced schema features - CRUD operations with GraphQL - Integrating GraphQL clients and WebSocket communication - Writing integration tests for GraphQL APIs ## 🔥 Course Structure: - Total Length: 12h 40m - Sections: 11 - Lectures: 176 ## 🏗️ Key Modules: - GraphQL Basics and Spring WebFlux Setup - QueryMapping, Mutations, and Nested Objects - Data Fetching, Resolvers, and Field Selections - Solving N+1 Problem and Optimizing Queries - Directives, Aliases, Variables, and Fragments - CRUD with GraphQL and DTO/Entity Design - Subscriptions for Real-Time Data - Exception Handling and Error Extensions - GraphQL Client Implementations (WebClient, WebSocket) - Integration Testing for GraphQL APIs - External Services Integration and Full Application Demo ## ✅ By the End of This Course: You'll be able to: - Confidently build APIs with GraphQL - Design effective GraphQL schemas - Develop reactive microservices with Spring WebFlux - Build real-time applications with GraphQL Subscriptions - Implement robust validation and error handling - Write effective tests for GraphQL APIs</description>
<size>4958455133</size>
</item><item>
<title>National Library of Medicine - Health Policy and Services Research (Full Collection)</title>
<category>Paper</category>
<infohash>3d4c70892de2d3ff3b0871aa80786ade3f82231b</infohash>
<guid>https://academictorrents.com/details/3d4c70892de2d3ff3b0871aa80786ade3f82231b</guid>
<link>https://academictorrents.com/details/3d4c70892de2d3ff3b0871aa80786ade3f82231b</link>
<description>This is a full download of the entire Health Policy and Services Research digital collection from the National Library of Medicine as it exists today. Each paper in the collection is included in both PDF and plaintext formats, along with an associated metadata file. License, author, and other information for each item can be found in that metadata file. This archive is part of the Safeguarding Research project: https://safeguarding-research.discourse.group Description of the collection from the NLM: The Health Policy and Services Research Collection is a curated compilation of gray literature, which is defined by the New York Academy of Medicine as that “produced on all levels of government, academics, business and industry in print and electronic formats, but which is not controlled by commercial publishers.” Gray literature can offer a more complete view of the research by providing access to information that may not be found in formally published sources (reports, dissertations, conference abstracts, official documents, research-in-progress, clinical trials, etc.). Health services research (HSR) broadly examines issues of cost, access and quality of health care, as well as the impact these aspects may have on the health of a group or an individual.</description>
<size>20075465593</size>
</item><item>
<title>eswiki-20250201-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>1de5141869d85fdca20d137b4d8e5b6281aaf77a</infohash>
<guid>https://academictorrents.com/details/1de5141869d85fdca20d137b4d8e5b6281aaf77a</guid>
<link>https://academictorrents.com/details/1de5141869d85fdca20d137b4d8e5b6281aaf77a</link>
<description>Spanish Wikipedia Multistream 2025-02-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>4909826376</size>
</item><item>
<title>Reddit comments/submissions 2025-01</title>
<category>Dataset</category>
<infohash>4882c5e5772ef0ab461d7318ca3d1f11cd792f34</infohash>
<guid>https://academictorrents.com/details/4882c5e5772ef0ab461d7318ca3d1f11cd792f34</guid>
<link>https://academictorrents.com/details/4882c5e5772ef0ab461d7318ca3d1f11cd792f34</link>
<description>Reddit comments and submissions from 2025-01 collected by u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps Previous months can be found here https://academictorrents.com/details/ba051999301b109eab37d16f027b3f49ade2de13</description>
<size>57290990567</size>
</item><item>
<title>catalog.archives.gov-lgbt</title>
<category>Dataset</category>
<infohash>b67dfb8cc94a98c4dc5e4fd23201df7beb5c2a7c</infohash>
<guid>https://academictorrents.com/details/b67dfb8cc94a98c4dc5e4fd23201df7beb5c2a7c</guid>
<link>https://academictorrents.com/details/b67dfb8cc94a98c4dc5e4fd23201df7beb5c2a7c</link>
<description>A partial mirror of catalog.archives.gov, filtered for the LGBTQ-related keywords found in keywords.txt. The list of keywords was obtained from catalog links that were in turn manually collected from https://www.archives.gov/research/lgbt For each keyword in keywords.txt, we fire off a search and attempt to download all metadata (folder "search-results") and attachments (folders "tifs", "jpgs", "pdfs", "others"). Folders are packed as ZStandard-compressed tarballs to save space and to reduce overhead in torrent metadata. All data unpacked is approximately 3 TB, tifs being 2.6 TB of that. Overview: search-results.tar.zst contains all JSON metadata that would be available on search results pages. This includes descriptions, authorship, and year of each record found, plus a list of download URLs for PDFs etc. It's best to download those files first to determine whether this dataset contains something specific you need. pdfs.tar.zst, tifs.tar.zst, jpgs.tar.zst, others.tar.zst contain the actual downloads, segmented by file type for compression purposes. download-urls.txt.zst contains the list of AWS S3 URLs that were downloaded into those folders. generate-urls.py was used to scrape the catalog for metadata. The detailed procedure for scraping is outlined in steps.txt. Data captured around 2025-02-23.</description>
<size>1428005290960</size>
</item><item>
<title>Neo4j graph dataset of cycling paths in Slovenia</title>
<category>Dataset</category>
<infohash>47b76c1a670c6c2e71005f70cf606a185c2eb60c</infohash>
<guid>https://academictorrents.com/details/47b76c1a670c6c2e71005f70cf606a185c2eb60c</guid>
<link>https://academictorrents.com/details/47b76c1a670c6c2e71005f70cf606a185c2eb60c</link>
<description>Navigating through a real-world map can be represented as a bidirected graph, with nodes representing the intersections and edges representing the roads between them. In cycling, we can plan training as a set of nodes and edges the athlete must cover. Optimizing routes using artificial intelligence is a well-studied problem, and much work has been done on finding the quickest and shortest paths between two points. In cycling, however, the optimum is not necessarily the shortest or quickest path, but the one where a cyclist covers the suitable distance, ascent, and descent based on their training parameters. This paper presents a Neo4j graph-based dataset of cycling routes in Slovenia. It consists of 152,659 nodes representing individual road intersections and 410,922 edges representing the roads between them. The dataset allows researchers to develop and optimize cycling training generation algorithms in which distance, ascent, descent, and road type are considered.</description>
<size>238330049</size>
</item><item>
<title>Reddit subreddits metadata, rules and wikis 2025-01</title>
<category>Dataset</category>
<infohash>5d0bf258a025a5b802572ddc29cde89bf093185c</infohash>
<guid>https://academictorrents.com/details/5d0bf258a025a5b802572ddc29cde89bf093185c</guid>
<link>https://academictorrents.com/details/5d0bf258a025a5b802572ddc29cde89bf093185c</link>
<description>- subreddit about pages and metadata - includes description, subscriber count, nsfw flag, icon urls, and more - 22 million subreddits - subreddit metadata only - subreddits that could not be retrieved, but at some point appeared in the pushshift or arctic shift data dumps - metadata includes number of posts+comments and the date of the first post+comment - 1.6 million subreddits - subreddit rules - posting/commenting rules of subreddits that go beyond the site wide rules - 345k subreddits - subreddit wiki pages - wiki text contents of URLs that can be found in the pushshift or arctic shift data dumps - 323k pages Data was retrieved in January and February 2025. General documentation, APIs and other reddit related data can be found at https://github.com/ArthurHeitmann/arctic_shift JSON schemas specifically are at https://github.com/ArthurHeitmann/arctic_shift/tree/master/schemas/subreddits</description>
<size>2648464712</size>
</item><item>
<title>noaa-nexrad-level2-2022-to-2024</title>
<category>Dataset</category>
<infohash>f240a34952e6e9d270d4079dffd088d474edf367</infohash>
<guid>https://academictorrents.com/details/f240a34952e6e9d270d4079dffd088d474edf367</guid>
<link>https://academictorrents.com/details/f240a34952e6e9d270d4079dffd088d474edf367</link>
<description>A partial mirror of https://noaa-nexrad-level2.s3.amazonaws.com/ Data is from 2022 to 2024 and has been filtered down to include only storm-related events, using https://scm.xan.host/nexrad-archive.git/ with default settings. The input CSV files are part of this torrent as well.</description>
<size>17245276488606</size>
</item><item>
<title>data-gv-at-2025-02</title>
<category>Dataset</category>
<infohash>eaf2077dc926b780389f8b12ad6c6b30039f0901</infohash>
<guid>https://academictorrents.com/details/eaf2077dc926b780389f8b12ad6c6b30039f0901</guid>
<link>https://academictorrents.com/details/eaf2077dc926b780389f8b12ad6c6b30039f0901</link>
<description>Recursive copy of all datasets on data.gv.at. Data captured around 2025-02-05. Most large folders have been compressed with ZStandard. Most raw data is in ./05-dataset-resource-downloads; metadata is in ./03-dataset-pages. For nextcloud and offenerhaushalt, the intermediate download pages have been dealt with, and their contents are in separate folders: ./07-nextcloud-downloads, ./10-vrv97-offenerhaushalt, ./11-www-offenerhaushalt, ./12-offenerhaushalt. TIFF files (geo data) have been moved to a separate folder due to their size.</description>
<size>6320720329023</size>
</item><item>
<title>www.fda.gov-guidance</title>
<category>Dataset</category>
<infohash>378d7c58cfc34ceec016b663e885adb50237deb1</infohash>
<guid>https://academictorrents.com/details/378d7c58cfc34ceec016b663e885adb50237deb1</guid>
<link>https://academictorrents.com/details/378d7c58cfc34ceec016b663e885adb50237deb1</link>
<description>A mirror of all FDA guidance documents as found on https://www.fda.gov/regulatory-information/search-fda-guidance-documents Data captured on 2025-02-16. steps.txt details how the data was obtained. The interesting documents can be found with rg "Per a court order, HHS is required to restore this website"</description>
<size>656460789</size>
</item><item>
<title>Subreddit comments/submissions 2005-06 to 2024-12</title>
<category>Dataset</category>
<infohash>1614740ac8c94505e4ecb9d88be8bed7b6afddd4</infohash>
<guid>https://academictorrents.com/details/1614740ac8c94505e4ecb9d88be8bed7b6afddd4</guid>
<link>https://academictorrents.com/details/1614740ac8c94505e4ecb9d88be8bed7b6afddd4</link>
<description>This is the top 40,000 subreddits from Reddit's history in separate files. You can use your torrent client to download only the subreddits you're interested in. These are from the pushshift dumps from 2005-06 to 2024-12, which can be found here https://academictorrents.com/details/ba051999301b109eab37d16f027b3f49ade2de13 These are zstandard compressed ndjson files. Example Python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps If you have questions, please DM u/Watchful on reddit or respond to this post https://www.reddit.com/r/pushshift/comments/1akrhg3/separate_dump_files_for_the_top_40k_subreddits/</description>
<size>3275329715321</size>
</item><item>
<title>consumerfinance.gov</title>
<category>Dataset</category>
<infohash>936297ea088331f08bc7cb3ac144d94ef8ce8dd8</infohash>
<guid>https://academictorrents.com/details/936297ea088331f08bc7cb3ac144d94ef8ce8dd8</guid>
<link>https://academictorrents.com/details/936297ea088331f08bc7cb3ac144d94ef8ce8dd8</link>
<description>A recursive, bare-bones mirror of https://www.consumerfinance.gov/data-research/research-reports/ Includes all pages (without CSS/images) and all PDF downloads. Data captured on 2025-02-08.</description>
<size>847912033</size>
</item><item>
<title>fbi-cde</title>
<category>Dataset</category>
<infohash>69c64f9ddfd4c86dfc944b60092b8ae8d13383d7</infohash>
<guid>https://academictorrents.com/details/69c64f9ddfd4c86dfc944b60092b8ae8d13383d7</guid>
<link>https://academictorrents.com/details/69c64f9ddfd4c86dfc944b60092b8ae8d13383d7</link>
<description>A mirror of all datasets from https://cde.ucr.cjis.gov/LATEST/webapp/#/pages/downloads Data captured on 2025-02-08. Mirrored from https://archive.org/details/fbi-cde</description>
<size>54170442007</size>
</item><item>
<title>ftp.ncei.noaa.gov</title>
<category>Dataset</category>
<infohash>8010da927e6e65115c24d990b0e69986ef9ac202</infohash>
<guid>https://academictorrents.com/details/8010da927e6e65115c24d990b0e69986ef9ac202</guid>
<link>https://academictorrents.com/details/8010da927e6e65115c24d990b0e69986ef9ac202</link>
<description>A complete mirror of ftp://ftp.ncei.noaa.gov/pub/data/noaa/ Captured between 2025-02-07 and 2025-02-09. Data folders have been re-packed as ZStandard-compressed tarballs (.tar.zst), and the inner .gz files have been decompressed. This is to reduce the size of torrent metadata and to save disk space by compressing multiple consecutive files.</description>
<size>127795556226</size>
</item><item>
<title>par.nsf.gov</title>
<category>Dataset</category>
<infohash>efeb2ac6acf6dd596a6a2aecb390c85e310484f3</infohash>
<guid>https://academictorrents.com/details/efeb2ac6acf6dd596a6a2aecb390c85e310484f3</guid>
<link>https://academictorrents.com/details/efeb2ac6acf6dd596a6a2aecb390c85e310484f3</link>
<description>https://archive.org/details/par.nsf.gov A barebones mirror of https://par.nsf.gov/search/sort:publication_date%20desc covering the index and records' metadata, but not the actual papers. search.tar.zst contains /search/ pages, biblio.tar.zst contains /biblio/ pages. Only HTML, no images or CSS. Captured on 2025-02-13.</description>
<size>1363107189</size>
</item><item>
<title>nrel-pds-nsrdb-partial</title>
<category>Dataset</category>
<infohash>9210e4c2d6fc592175f9bd18ab538bb53e317d4d</infohash>
<guid>https://academictorrents.com/details/9210e4c2d6fc592175f9bd18ab538bb53e317d4d</guid>
<link>https://academictorrents.com/details/9210e4c2d6fc592175f9bd18ab538bb53e317d4d</link>
<description>Partial backup of https://data.openei.org/s3_viewer?bucket=nrel-pds-nsrdb&amp;prefix=v3%2F</description>
<size>17874437414493</size>
</item><item>
<title>Reddit comments/submissions 2025-01</title>
<category>Dataset</category>
<infohash>4fd14d4c3d792e0b1c5cf6b1d9516c48ba6c4a24</infohash>
<guid>https://academictorrents.com/details/4fd14d4c3d792e0b1c5cf6b1d9516c48ba6c4a24</guid>
<link>https://academictorrents.com/details/4fd14d4c3d792e0b1c5cf6b1d9516c48ba6c4a24</link>
<description>Reddit comments and submissions from 2025-01. Documentation, JSON schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>56356486542</size>
</item><item>
<title>enwiki-20250201-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>e4e18bed26319b75fe5ff59bbd80a6c43542e83a</infohash>
<guid>https://academictorrents.com/details/e4e18bed26319b75fe5ff59bbd80a6c43542e83a</guid>
<link>https://academictorrents.com/details/e4e18bed26319b75fe5ff59bbd80a6c43542e83a</link>
<description>English Wikipedia Multistream 2025-02-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>24642610651</size>
</item><item>
<title>DeepSeek-R1 model weights</title>
<category>Dataset</category>
<infohash>0b5d0030e27c3b24eaefe4b5622bfa0011f77fa3</infohash>
<guid>https://academictorrents.com/details/0b5d0030e27c3b24eaefe4b5622bfa0011f77fa3</guid>
<link>https://academictorrents.com/details/0b5d0030e27c3b24eaefe4b5622bfa0011f77fa3</link>
<description>Weights for DeepSeek-R1 from Hugging Face. We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning. With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. https://i.imgur.com/q6NKD6T.png ## License This code repository and the model weights are licensed under the MIT License. The DeepSeek-R1 series supports commercial use and allows for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that: DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the Qwen-2.5 series, which are originally licensed under the Apache 2.0 License, and now finetuned with 800k samples curated with DeepSeek-R1. DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the Llama 3.1 license. DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the Llama 3.3 license.</description>
<size>688586727753</size>
</item><item>
<title>zhwiki-20250120-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>f33ffe8cf8aa90a334dc1b21a745b11aca3e50f1</infohash>
<guid>https://academictorrents.com/details/f33ffe8cf8aa90a334dc1b21a745b11aca3e50f1</guid>
<link>https://academictorrents.com/details/f33ffe8cf8aa90a334dc1b21a745b11aca3e50f1</link>
<description>Chinese Wikipedia Multistream 2025-01-20 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>3204172977</size>
</item><item>
<title>Why You've Got Mail: Evaluating Inbox Privacy Implications of Email Marketing Practices in Online Apps and Services</title>
<category>Dataset</category>
<infohash>ef3d9b9d6d7878d7406eed8e303871af5f235335</infohash>
<guid>https://academictorrents.com/details/ef3d9b9d6d7878d7406eed8e303871af5f235335</guid>
<link>https://academictorrents.com/details/ef3d9b9d6d7878d7406eed8e303871af5f235335</link>
<description>This study explores the widespread perception that personal data, such as email addresses, may be shared or sold without informed user consent, investigating whether these concerns are reflected in actual practices of popular online services and apps. Over the course of a year, we collected and analyzed the source, volume, frequency, and content of emails received by users after signing up for the 150 most popular online services and apps across various sectors. By examining patterns in email communications, we aim to identify consistent strategies used across industries, including potential signs of third-party data sharing. This analysis provides a critical evaluation of how email marketing tactics may intersect with data-sharing practices, with important implications for consumer privacy and regulatory oversight. Our study findings, conducted post-CCPA and GDPR, indicate that while no unknown third-party spam email was detected, internal and authorized third-party email marketing practices were pervasive, with companies frequently sending promotional and CRM emails despite opt-out preferences. The framework established in this work is designed to be scalable, allowing for continuous monitoring, and can be extended to include a more diverse set of apps and services for broader analysis, ultimately contributing to transparency in email address privacy practices.</description>
<size>163982178</size>
</item><item>
<title>Arion rufus snails dataset</title>
<category>Dataset</category>
<infohash>aca221de8a3917804fb8be301eeee05067966f3a</infohash>
<guid>https://academictorrents.com/details/aca221de8a3917804fb8be301eeee05067966f3a</guid>
<link>https://academictorrents.com/details/aca221de8a3917804fb8be301eeee05067966f3a</link>
<description/>
<size>200235236</size>
</item><item>
<title>Reddit comments/submissions 2005-06 to 2024-12</title>
<category>Dataset</category>
<infohash>ba051999301b109eab37d16f027b3f49ade2de13</infohash>
<guid>https://academictorrents.com/details/ba051999301b109eab37d16f027b3f49ade2de13</guid>
<link>https://academictorrents.com/details/ba051999301b109eab37d16f027b3f49ade2de13</link>
<description>Reddit comments and submissions from 2005-06 to 2024-12 collected by pushshift and u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps The more recent dumps are collected by u/RaiderBDev</description>
<size>3124743990853</size>
</item><item>
<title>Stack Exchange Data Dump (2024-12-31)</title>
<category>Dataset</category>
<infohash>9132fa0997b430e863d3e053509a263c54ce12f3</infohash>
<guid>https://academictorrents.com/details/9132fa0997b430e863d3e053509a263c54ce12f3</guid>
<link>https://academictorrents.com/details/9132fa0997b430e863d3e053509a263c54ce12f3</link>
<description>This data dump is sourced from the various sites in the Stack Exchange network of Q&amp;A sites. This dump contains data up to and including 2024-12-31. The exact licenses for each bit of content are embedded in each entry. For license date ranges, see the root-level license.txt, or https://stackoverflow.com/help/licensing. For the schema, see the sede-and-data-dump-schema.md file within each .7z. This torrent has been mirrored from https://archive.org/details/stackexchange_20241231</description>
<size>95466553344</size>
</item><item>
<title>MagnetDB: A Longitudinal Torrent Discovery Dataset with IMDb-Matched Movies and TV Shows</title>
<category>Dataset</category>
<infohash>1ce8202af7a500469177ed99de5cd9bf66078de0</infohash>
<guid>https://academictorrents.com/details/1ce8202af7a500469177ed99de5cd9bf66078de0</guid>
<link>https://academictorrents.com/details/1ce8202af7a500469177ed99de5cd9bf66078de0</link>
<description/>
<size>28235915867</size>
</item><item>
<title>Cloud Telescope Internet Background Radiation August 2023</title>
<category>Dataset</category>
<infohash>478e651ee303f794e4ff9b458a225578d694d097</infohash>
<guid>https://academictorrents.com/details/478e651ee303f794e4ff9b458a225578d694d097</guid>
<link>https://academictorrents.com/details/478e651ee303f794e4ff9b458a225578d694d097</link>
<description>This dataset results from a 47-day Cloud Telescope Internet Background Radiation collection experiment conducted during the months of August and September 2023. A total of 260 EC2 instances (sensors) were deployed across all 26 commercially available AWS regions at the time, 10 sensors per region. A Cloud Telescope sensor does not serve information. All traffic arriving at the sensor is unsolicited, and potentially malicious. Sensors were configured to allow all unsolicited traffic. In this experiment, we implemented high-level responders on TCP ports 23 and 80, written in Rust, to record the commands issued by botnets such as Mirai when they attempt to infect IoT devices. All other TCP ports were configured to respond to connection requests only until three-way handshake completion. This should enable TCP connection state analysis (SYN, FIN, ACK, ...). The architecture is reproducible. Terraform Infrastructure-as-Code is available at: https://github.com/lucasbeiler/ibr-iac</description>
<size>462019362816</size>
</item><item>
<title>InsectSet459 (v1, TrainVal): A large dataset for automatic acoustic identification of insects (Orthoptera and Cicadidae)</title>
<category>Dataset</category>
<infohash>a8278b49a33cc05a23b14aebd2da75d693012678</infohash>
<guid>https://academictorrents.com/details/a8278b49a33cc05a23b14aebd2da75d693012678</guid>
<link>https://academictorrents.com/details/a8278b49a33cc05a23b14aebd2da75d693012678</link>
<description>In 2024, the public animal sound database xeno-canto has seen a dramatic increase in insect sound recordings. This is due to the publication of several large collections of field and laboratory recordings from insect sound experts, as well as increased adoption by citizen scientists uploading their insect sound observations to the website. We used this opportunity to expand our previously published datasets (InsectSet32, InsectSet47 &amp; InsectSet66) to compile the first large-scale dataset of insect sounds that is easy to use for training deep learning methods to detect and classify insect sounds in the wild.</description>
<size>67715835010</size>
</item><item>
<title>Reddit comments/submissions 2024-12</title>
<category>Dataset</category>
<infohash>eb2017da9f63a49460dde21a4ebe3b7c517f3ad9</infohash>
<guid>https://academictorrents.com/details/eb2017da9f63a49460dde21a4ebe3b7c517f3ad9</guid>
<link>https://academictorrents.com/details/eb2017da9f63a49460dde21a4ebe3b7c517f3ad9</link>
<description>Reddit comments and submissions from 2024-12. Documentation, JSON schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>51272335463</size>
</item><item>
<title>ISDB: In Silico Spectral Databases of Natural Products</title>
<category>Dataset</category>
<infohash>ab530b4e6493c7e7c5539b65cdbe84263d15b4e6</infohash>
<guid>https://academictorrents.com/details/ab530b4e6493c7e7c5539b65cdbe84263d15b4e6</guid>
<link>https://academictorrents.com/details/ab530b4e6493c7e7c5539b65cdbe84263d15b4e6</link>
<description>An In Silico spectral DataBase (ISDB) of natural products, calculated from structures aggregated in the frame of the LOTUS Initiative (https://doi.org/10.7554/eLife.70780). Fragmented using cfm-predict 4 (https://doi.org/10.1021/acs.analchem.1c01465). In silico spectral database preparation and use for dereplication were initially described in Integration of Molecular Networking and In-Silico MS/MS Fragmentation for Natural Products Dereplication https://doi.org/10.1021/ACS.ANALCHEM.5B04804 See https://github.com/mandelbrot-project/spectral_lib_builder for associated building scripts. See https://github.com/mandelbrot-project/spectral_lib_matcher for associated matching scripts. The pickle-formatted ISDBs are built for quicker loading via matchms.</description>
<size>3421172513</size>
</item><item>
<title>The LOTUS Initiative for Open Natural Products Research: frozen dataset union wikidata (with metadata)</title>
<category>Dataset</category>
<infohash>a6e010f66bb06dfe59ccde2a1fe79565d389340c</infohash>
<guid>https://academictorrents.com/details/a6e010f66bb06dfe59ccde2a1fe79565d389340c</guid>
<link>https://academictorrents.com/details/a6e010f66bb06dfe59ccde2a1fe79565d389340c</link>
<description>Dataset present on Wikidata, used in the frame of the LOTUS Initiative: https://doi.org/10.7554/eLife.70780</description>
<size>108186516</size>
</item><item>
<title>MSV000092400</title>
<category>Dataset</category>
<infohash>11093a9450b26514f34bc24f1f41c759d2cfac23</infohash>
<guid>https://academictorrents.com/details/11093a9450b26514f34bc24f1f41c759d2cfac23</guid>
<link>https://academictorrents.com/details/11093a9450b26514f34bc24f1f41c759d2cfac23</link>
<description/>
<size>524346582</size>
</item><item>
<title>[udemy] Level 1 CFA® Exam Prep Bootcamp (Part 2/2)</title>
<category>Course</category>
<infohash>666767aa082390e6a10035e9bb25f9d09af9dcaa</infohash>
<guid>https://academictorrents.com/details/666767aa082390e6a10035e9bb25f9d09af9dcaa</guid>
<link>https://academictorrents.com/details/666767aa082390e6a10035e9bb25f9d09af9dcaa</link>
<description>Official Course URL: udemy.com/course/exam-prep-cfa-level-1-bootcamp-2020-curriculum-part-22/ Course Overview: This advanced bootcamp complements Part 1 by covering critical CFA® Level 1 exam topics such as financial reporting, portfolio management, equity investments, fixed income, and derivatives. Designed to build a comprehensive foundation, this course is ideal for candidates aiming for excellence. What You'll Learn: - Financial Reporting: Master income statements, balance sheets, and cash flow analysis. - Portfolio Management: Understand risk management, portfolio construction, and fintech innovations. - Equity Investments: Explore equity valuation, securities, and market efficiency. - Fixed Income: Learn credit analysis, risk evaluation, and fixed-income valuation techniques. - Derivatives: Gain insights into derivative instruments, pricing, and valuation. Course Benefits: - Comprehensive Curriculum: Aligned with CFA® Level 1 exam requirements. - Real-World Examples: Study through case studies and practical tutorials. - Dynamic Instruction: Engaging lessons with a fast-paced delivery. - Expert Guidance: Learn from seasoned finance professionals with years of experience. Total Hours of Course: 29 hours 33 minutes Course Size: 3.67 GB Subtitles: English, Persian Who is this course for? Ideal for CFA® Level 1 candidates, finance professionals, and anyone looking to excel in investment management and financial analysis.</description>
<size>3945479239</size>
</item><item>
<title>[udemy] Level 1 CFA® Exam Prep Bootcamp (Part 1/2)</title>
<category>Course</category>
<infohash>28d666f4c1b12c1727ce5c2ce5131fb37abf9284</infohash>
<guid>https://academictorrents.com/details/28d666f4c1b12c1727ce5c2ce5131fb37abf9284</guid>
<link>https://academictorrents.com/details/28d666f4c1b12c1727ce5c2ce5131fb37abf9284</link>
<description>Official Course URL: udemy.com/course/level-1-cfa-exam-prep-bootcamp/ Course Overview: This bootcamp prepares you for the CFA® Level 1 Exam by covering core topics like ethics, quantitative methods, corporate finance, economics, and alternative investments. Learn concepts step-by-step with expert instruction and real-world examples designed to build a strong foundation for exam success. What You'll Learn: - Ethics: Master the Code of Ethics, Standards of Professional Conduct, and GIPS. - Quantitative Methods: Understand time value of money, probability, hypothesis testing, and more. - Corporate Finance: Gain insights into corporate governance, budgeting, and working capital management. - Economics: Study demand/supply, business cycles, exchange rates, and trade. - Alternative Investments: Explore investment categories, benefits, and valuation methods. Course Benefits: - Beginner-Friendly: Designed for CFA® Level 1 candidates or finance beginners. - Comprehensive Coverage: All topics aligned with the CFA® curriculum. - Real-World Insights: Learn from seasoned finance professionals. - Study Aid: Supplement official study materials with engaging lessons and exercises. Total Hours of Course: 26 hours 3 minutes Course Size: 3.22 GB Subtitles: English, Persian Who is this course for? Ideal for CFA® Level 1 candidates, finance enthusiasts, or anyone aiming to excel in investment management, corporate finance, or economics.</description>
<size>3466427634</size>
</item><item>
<title>[udemy] Build REST APIs with Django REST Framework and Python</title>
<category>Course</category>
<infohash>204e1d43682977ee3eb0034d92e2fad1afa6c98f</infohash>
<guid>https://academictorrents.com/details/204e1d43682977ee3eb0034d92e2fad1afa6c98f</guid>
<link>https://academictorrents.com/details/204e1d43682977ee3eb0034d92e2fad1afa6c98f</link>
<description>Official Course URL: udemy.com/course/django-rest-framework/ Course Overview: This comprehensive course will guide you from basics to advanced concepts of Django REST Framework (DRF), enabling you to build professional-grade REST APIs. Through hands-on projects, including an IMDB API clone, you'll master essential DRF concepts and tools. What You'll Learn: - API Basics: Understand REST API fundamentals and DRF's core functionality. - CRUD Operations: Build and manage resources effectively. - Advanced DRF Concepts: Implement permissions, throttling, pagination, and filtering. - Hands-On Projects: Create an IMDB API clone and practice API testing with Postman. Course Benefits: - Beginner-Friendly: Start with fundamental API concepts, no advanced experience needed. - Real-World Applications: Build APIs for web and mobile app backends. - Professional Skills: Master authentication methods like Token and JWT. - Complete DRF Mastery: Gain in-depth knowledge through official documentation and examples. Total Hours of Course: 12 hours 58 minutes Course Size: 1.67 GB Subtitles: English, Persian Who is this course for? This course is ideal for Django developers looking to expand their expertise into API development and build robust backend systems with Django REST Framework.</description>
<size>1801903697</size>
</item><item>
<title>[udemy] Adobe Premiere Pro CC Masterclass: Video Editing in Premiere</title>
<category>Course</category>
<infohash>1e6b74db6defaac37ef293d9c7ad667ea2ab87d9</infohash>
<guid>https://academictorrents.com/details/1e6b74db6defaac37ef293d9c7ad667ea2ab87d9</guid>
<link>https://academictorrents.com/details/1e6b74db6defaac37ef293d9c7ad667ea2ab87d9</link>
<description>Official Course URL: udemy.com/course/adobe-premiere-pro-video-editing/ Course Overview: This comprehensive course is designed to transform you into a confident video editor with Adobe Premiere Pro CC. From basic editing techniques to advanced effects, motion graphics, and audio editing, you'll master all aspects of video production through hands-on practice with included project files. What You'll Learn: - Editing Basics: Start a project, add transitions, and edit audio and video seamlessly. - Color Grading: Correct and grade videos to enhance their aesthetic appeal. - Motion Graphics: Create dynamic titles, motion effects, and overlays. - Green Screen &amp; Effects: Edit chroma key footage and apply visual effects. Course Benefits: - Beginner-Friendly: Ideal for creators with no prior experience in Premiere Pro. - Advanced Techniques: Learn efficiency tips and advanced editing workflows. - Hands-On Learning: Practice with real-world projects using supplied video and audio clips. - Career Growth: Build confidence and skills to start editing professionally. Total Hours of Course: 25 hours 43 minutes Course Size: 6.15 GB Subtitles: English, Persian Who is this course for? This course is perfect for beginners, transitioning editors, or anyone aiming to master Adobe Premiere Pro for professional or personal video projects.</description>
<size>6604648012</size>
</item><item>
<title>[udemy] Laravel 11 - From Basics to Advance (2024)</title>
<category>Course</category>
<infohash>3092d8acc83c2587c2ca969e526047b352045892</infohash>
<guid>https://academictorrents.com/details/3092d8acc83c2587c2ca969e526047b352045892</guid>
<link>https://academictorrents.com/details/3092d8acc83c2587c2ca969e526047b352045892</link>
<description>Official Course URL: udemy.com/course/laravel-basics-to-advance/ Course Overview: Master Laravel 11, the robust and elegant PHP framework, with this all-encompassing course tailored for beginners and developers aiming to enhance their skills. From routing and database management to advanced topics like APIs, authentication, and real-time projects, this course equips you to create professional, scalable web applications. What You'll Learn: - Laravel Essentials: Understand routing, controllers, blade templates, and more. - Database Management: Leverage Eloquent ORM and Query Builder for efficient data handling. - Advanced Techniques: Authentication, authorization, middleware, and background processing. - Hands-On Projects: Build real-world projects like an Ecommerce Cart, Messenger, and Google Keep Clone. Course Benefits: - Beginner-Friendly: Start with fundamentals, no advanced knowledge required. - Real-World Applications: Work on projects that mimic professional scenarios. - Career Growth: Gain skills that make you a valuable asset to web development teams. - Lifetime Access: Revisit course materials anytime with lifetime access. Total Hours of Course: 51 hours 44 minutes Course Size: 6.69 GB Subtitles: English, Persian Who is this course for? This course is ideal for beginners in web development, aspiring Laravel developers, PHP programmers, and professionals aiming to create scalable, secure web applications with Laravel.</description>
<size>7186917046</size>
</item><item>
<title>[udemy] The Complete Adobe After Effects Bootcamp: Basic to Advanced</title>
<category>Course</category>
<infohash>9bfd535befddfe73d2a4f548fddca042922dba22</infohash>
<guid>https://academictorrents.com/details/9bfd535befddfe73d2a4f548fddca042922dba22</guid>
<link>https://academictorrents.com/details/9bfd535befddfe73d2a4f548fddca042922dba22</link>
<description>Official Course URL: udemy.com/course/after-effects-cc-bootcamp/ Course Overview: Master Adobe After Effects CC with this comprehensive bootcamp, covering Motion Graphics, Visual Effects, and VFX Compositing. Engage in over 55 real-world projects, from beginner to advanced levels, to unleash your creativity and build professional-grade animations and video effects. What You'll Learn: - Motion Graphics Mastery: Create stunning designs and animations using advanced techniques. - Visual Effects Skills: Grasp motion tracking, chroma keying, rotoscoping, and camera tracking. - 3D Animation: Work with 3D cameras, lights, and shadows for immersive motion graphics. - Project-Based Learning: Complete 55+ practical projects, from basics to complex effects. Course Benefits: - Beginner-Friendly: Start with the fundamentals, no prior experience required. - Industry-Relevant Skills: Gain expertise for freelancing or creating professional-grade videos. - Creative Freedom: Learn to design dynamic titles, lower thirds, and engaging infographics. - Real-World Applications: Build projects tailored for YouTube, Vimeo, and other platforms. Total Hours of Course: 35 hours 5 minutes Course Size: 6.85 GB Subtitles: English, Persian Who is this course for? This course is perfect for beginners, YouTube creators, video editors, and motion graphics designers eager to master After Effects and enhance their video production skills.</description>
<size>7363260180</size>
</item><item>
<title>[udemy] Complete Manual Software Testing + Agile + Scrum + Jira 2024</title>
<category>Course</category>
<infohash>cbec20f4ade504701c1bd308913cc5df10eb4d6b</infohash>
<guid>https://academictorrents.com/details/cbec20f4ade504701c1bd308913cc5df10eb4d6b</guid>
<link>https://academictorrents.com/details/cbec20f4ade504701c1bd308913cc5df10eb4d6b</link>
<description>Official Course URL: udemy.com/course/manual-software-testing-with-bugreporting-tool-almqc/ Course Overview: Master the fundamentals of manual software testing with this practical course, designed for beginners and non-IT professionals. From writing test cases to bug reporting using tools like Jira and ALM, this course provides hands-on experience in key testing methodologies, Agile principles, and Scrum practices. What You'll Learn: - Manual Testing Basics: Understand software life cycles, test cases, and defect reporting. - Agile &amp; Scrum: Grasp Scrum principles and Agile methodologies in software development. - Bug Reporting Tools: Hands-on experience with Jira, ALM/QC, and Bugzilla for tracking defects. - Test Management: Develop test data, write functional test cases, and optimize testing processes. Course Benefits: - Beginner-Friendly: Designed for non-IT professionals and career switchers. - Practical Approach: Focus on real-world testing scenarios and tools. - Comprehensive Coverage: Includes Agile, Scrum, bug reporting, and manual testing techniques. - Career-Oriented: Provides skills essential for QA roles in software testing. Total Hours of Course: 9 hours 15 minutes Course Size: 1.11 GB Subtitles: English, Persian Who is this course for? This course is ideal for beginners, non-IT professionals, and anyone interested in starting a career in software testing.</description>
<size>1193112913</size>
</item><item>
<title>[udemy] ACCA Audit and Assurance (F8) 2021 Past Paper Complete Guide</title>
<category>Course</category>
<infohash>ae85939aeb4854acdbd9e685d0833e0a4acf9a20</infohash>
<guid>https://academictorrents.com/details/ae85939aeb4854acdbd9e685d0833e0a4acf9a20</guid>
<link>https://academictorrents.com/details/ae85939aeb4854acdbd9e685d0833e0a4acf9a20</link>
<description>Official Course URL: udemy.com/course/acca-audit-and-assurance-f8-2021-past-paper-complete-guide/ Course Overview: This course provides ACCA students with an in-depth analysis of the September and December 2021 Audit and Assurance (AA/F8) past papers. Led by expert tutor James Wright, the course offers a complete breakdown of the ACCA AA exam scenarios, tutor insights, and essential exam techniques to help you pass with confidence. Gain access to exclusive notes, tips, and expert advice to master your exam preparation. What You'll Learn: - Exam Techniques: Learn proven methods to approach ACCA AA (F8) scenario-based questions. - Key Topics Covered: Audit risks, fraud detection, ethical threats, control limitations, and Key Audit Matters (KAM). - Detailed Analysis: Review 2021 ACCA past papers with expert insights and answer explanations. - Practical Insights: Gain knowledge from a qualified ACCA member with teaching experience. Course Benefits: - Exclusive Notes: Follow along with unique resources to enhance your revision. - ACCA Insights: Learn directly from a qualified ACCA tutor and lecturer. - Effective Preparation: Build confidence for your ACCA AA (F8) exam. - High-Quality Resources: Access ACCA-approved materials for exam success. Total Hours of Course: 2 hours 39 minutes Course Size: 484.2 MB Subtitles: English, Persian Who is this course for? This course is ideal for ACCA students preparing for their Audit and Assurance (AA/F8) exam, seeking expert advice and proven strategies to pass.</description>
<size>507793530</size>
</item><item>
<title>[udemy] Master of Essential C++ Programming: Beginner to Advanced</title>
<category>Course</category>
<infohash>8bd786e2ee84c05cd650e56f7f6232e73d66756a</infohash>
<guid>https://academictorrents.com/details/8bd786e2ee84c05cd650e56f7f6232e73d66756a</guid>
<link>https://academictorrents.com/details/8bd786e2ee84c05cd650e56f7f6232e73d66756a</link>
<description>Official Course URL: udemy.com/course/master-of-essential-c-programming-beginner-to-advanced/ Course Overview: Step into the world of programming with this beginner-friendly course on C++ programming. Designed for complete novices and intermediate learners, this course provides a hands-on approach to mastering C++ fundamentals, core programming concepts, and object-oriented principles. Through practical exercises and real-world examples, you'll gain confidence in writing structured and optimized C++ programs. What You'll Learn: - Core Concepts: Grasp variables, data types, operators, and control flow. - Programming Basics: Learn functions, modules, arrays, pointers, and strings. - Object-Oriented Principles: Dive into classes, objects, inheritance, and polymorphism. - Real-World Applications: Build practical C++ programs through engaging projects. Course Benefits: - Beginner-Friendly: No prior programming knowledge required. - Hands-On Learning: Apply concepts through exercises and projects. - Career Preparation: Build a strong foundation for software development roles. - Accessible Teaching: Easy-to-follow content for all levels. Total Hours of Course: 5 hours 44 minutes Course Size: 629.9 MB Subtitles: English, Persian Who is this course for? This course is ideal for high school or college students, aspiring software developers, and anyone looking to start their journey in C++ programming.</description>
<size>660557244</size>
</item><item>
<title>[udemy] SQL Masterclass for Financial Analysis &amp; Financial Reporting</title>
<category>Course</category>
<infohash>161282939abe2462e37cd8a59664043716a1a529</infohash>
<guid>https://academictorrents.com/details/161282939abe2462e37cd8a59664043716a1a529</guid>
<link>https://academictorrents.com/details/161282939abe2462e37cd8a59664043716a1a529</link>
<description>Official Course URL: udemy.com/course/sql-for-financial-data-analysis/ Course Overview: Unlock the power of SQL for financial data analysis and reporting. This course is tailored for non-tech professionals who want to streamline their analytics and reporting capabilities. Learn to extract and process financial data, prepare detailed reports like Profit &amp; Loss Statements and Balance Sheets, and calculate critical financial ratios through practical exercises. What You'll Learn: - SQL Basics: Master database querying techniques for financial data. - Report Preparation: Create Profit &amp; Loss Statements, Balance Sheets, and Cash Flow Statements. - Key Analytics: Calculate and interpret profitability, efficiency, and liquidity ratios. - Database Skills: Gain hands-on experience without prior technical expertise. Course Benefits: - Practical Applications: Apply SQL to real-world financial scenarios. - Independent Reporting: Reduce reliance on system-generated reports. - Career Advancement: Enhance your data analysis and reporting skills. - Expert Guidance: Learn from an experienced Chartered Accountant. Total Hours of Course: 5 hours 29 minutes Course Size: 701 MB Subtitles: English, Persian Who is this course for? This course is ideal for Financial Analysts, Business Analysts, Accountants, Bookkeepers, and Finance Professionals looking to enhance their data analysis capabilities using SQL.</description>
<size>735121934</size>
</item><item>
<title>ORFD: A Dataset and Benchmark for Off-Road Freespace Detection</title>
<category>Dataset</category>
<infohash>ec5ccf4b8e49271ee3b63660383facf43063f2f2</infohash>
<guid>https://academictorrents.com/details/ec5ccf4b8e49271ee3b63660383facf43063f2f2</guid>
<link>https://academictorrents.com/details/ec5ccf4b8e49271ee3b63660383facf43063f2f2</link>
<description>Freespace detection is an essential component of autonomous driving technology and plays an important role in trajectory planning. In the last decade, deep learning based freespace detection methods have proved feasible. However, these efforts were focused on urban road environments, and few deep learning based methods were specifically designed for off-road freespace detection due to the lack of off-road datasets and benchmarks. In this paper, we present the ORFD dataset, which, to our knowledge, is the first off-road freespace detection dataset. The dataset was collected in different scenes (woodland, farmland, grassland and countryside), different weather conditions (sunny, rainy, foggy and snowy) and different light conditions (bright light, daylight, twilight, darkness), and contains in total 12,198 LiDAR point cloud and RGB image pairs with the traversable area, non-traversable area and unreachable area annotated in detail. We propose a novel network named OFF-Net, which unifies a Transformer architecture to aggregate local and global information, meeting the large-receptive-field requirement of the freespace detection task. We also propose a cross-attention mechanism to dynamically fuse LiDAR and RGB image information for accurate off-road freespace detection. Dataset and code are publicly available at https://github.com/chaytonmin/OFF-Net.</description>
<size>45941056780</size>
</item><item>
<title>RELLIS-3D Dataset: Data, Benchmarks and Analysis</title>
<category>Dataset</category>
<infohash>4cfa80e6d91e8c6c79bcc2f405dbd9255b5cf4e8</infohash>
<guid>https://academictorrents.com/details/4cfa80e6d91e8c6c79bcc2f405dbd9255b5cf4e8</guid>
<link>https://academictorrents.com/details/4cfa80e6d91e8c6c79bcc2f405dbd9255b5cf4e8</link>
<description>Semantic scene understanding is crucial for robust and safe autonomous navigation, particularly so in off-road environments. Recent deep learning advances for 3D semantic segmentation rely heavily on large sets of training data; however, existing autonomy datasets either represent urban environments or lack multimodal off-road data. We fill this gap with RELLIS-3D, a multimodal dataset collected in an off-road environment, which contains annotations for 13,556 LiDAR scans and 6,235 images. The data was collected on the Rellis Campus of Texas A&amp;M University, and presents challenges to existing algorithms related to class imbalance and environmental topography. Additionally, we evaluate current state-of-the-art deep learning semantic segmentation models on this dataset. Experimental results show that RELLIS-3D presents challenges for algorithms designed for segmentation in urban environments. This novel dataset provides the resources needed by researchers to continue to develop more advanced algorithms and investigate new research directions to enhance autonomous navigation in off-road environments. RELLIS-3D is available at https://github.com/unmannedlab/RELLIS-3D</description>
<size>635808536706</size>
</item><item>
<title>Learning Self-Supervised Traversability With Navigation Experiences of Mobile Robots: A Risk-Aware Self-Training Approach</title>
<category>Dataset</category>
<infohash>f81ba64cfd597b0185d5d66ffe74a75b6ee0a80d</infohash>
<guid>https://academictorrents.com/details/f81ba64cfd597b0185d5d66ffe74a75b6ee0a80d</guid>
<link>https://academictorrents.com/details/f81ba64cfd597b0185d5d66ffe74a75b6ee0a80d</link>
<description>Mobile robots operating in outdoor environments face the challenge of navigating various terrains with different degrees of difficulty. Therefore, traversability estimation is crucial for safe and efficient robot navigation. Current approaches utilize a robot's driving experience to learn traversability in a self-supervised fashion. However, providing sufficient and diverse experience to the robot is difficult in many practical applications. In this paper, we propose a self-supervised traversability learning method that adapts to challenging terrains with limited prior experience. One key aspect is to enable prioritized learning of scarce yet high-risk terrains by using a risk-sensitive approach. To this end, we train a neural network through a risk-aware instance weighting scheme. Another key aspect is to leverage traversability pseudo-labels on the basis of a self-training scheme. The proposed confidence-regularized self-training generates high-quality pseudo-labels, thereby achieving reliable data augmentation for unexperienced terrains. The effectiveness of the proposed method is verified in extensive real-world experiments, ranging from structured urban environments to complex rugged terrains.</description>
<size>82687724241</size>
</item><item>
<title>The GOOSE Dataset for Perception in Unstructured Environments</title>
<category>Dataset</category>
<infohash>fcbfe2be74bf9e2de1197c19046d5633a416925c</infohash>
<guid>https://academictorrents.com/details/fcbfe2be74bf9e2de1197c19046d5633a416925c</guid>
<link>https://academictorrents.com/details/fcbfe2be74bf9e2de1197c19046d5633a416925c</link>
<description>The potential for deploying autonomous systems can be significantly increased by improving the perception and interpretation of the environment. However, the development of deep learning-based techniques for autonomous systems in unstructured outdoor environments poses challenges due to limited data availability for training and testing. To address this gap, we present the German Outdoor and Offroad Dataset (GOOSE), a comprehensive dataset specifically designed for unstructured outdoor environments. The GOOSE dataset incorporates 10000 labeled pairs of images and point clouds, which are utilized to train a range of state-of-the-art segmentation models on both image and point cloud data. We open-source the dataset, along with an ontology for unstructured terrain, as well as dataset standards and guidelines. This initiative aims to establish a common framework, enabling the seamless inclusion of existing datasets and a fast way to enhance the perception capabilities of various robots operating in unstructured environments. This framework also makes it possible to query data for specific weather conditions or sensor setups from a database in the future. The dataset, pre-trained models for offroad perception, and additional documentation can be found at https://goose-dataset.de/.</description>
<size>1788663254417</size>
</item><item>
<title>Excavating in the Wild: The GOOSE-Ex Dataset for Semantic Segmentation</title>
<category>Dataset</category>
<infohash>e58406cf3c21e608dc0d8c0be63387eab77040dc</infohash>
<guid>https://academictorrents.com/details/e58406cf3c21e608dc0d8c0be63387eab77040dc</guid>
<link>https://academictorrents.com/details/e58406cf3c21e608dc0d8c0be63387eab77040dc</link>
<description>The successful deployment of deep learning-based techniques for autonomous systems is highly dependent on the data availability for the respective system in its deployment environment. Especially for unstructured outdoor environments, very few datasets exist for even fewer robotic platforms and scenarios. In an earlier work, we presented the German Outdoor and Offroad Dataset (GOOSE) framework along with 10000 multimodal frames from an offroad vehicle to enhance the perception capabilities in unstructured environments. In this work, we address the generalizability of the GOOSE framework. To accomplish this, we open-source the GOOSE-Ex dataset, which contains an additional 5000 labeled multimodal frames from various completely different environments, recorded on a robotic excavator and a quadruped platform. We perform a comprehensive analysis of the semantic segmentation performance on different platforms and sensor modalities in unseen environments. In addition, we demonstrate how the combined datasets can be utilized for different downstream applications or competitions such as offroad navigation, object manipulation or scene completion. The dataset, its platform documentation and pre-trained state-of-the-art models for offroad perception will be made available on https://goose-dataset.de/.</description>
<size>38327770720</size>
</item><item>
<title>The GOOSE Dataset for Perception in Unstructured Environments</title>
<category>Dataset</category>
<infohash>2cb6b500ade20906534793937963664fa46e349d</infohash>
<guid>https://academictorrents.com/details/2cb6b500ade20906534793937963664fa46e349d</guid>
<link>https://academictorrents.com/details/2cb6b500ade20906534793937963664fa46e349d</link>
<description>The potential for deploying autonomous systems can be significantly increased by improving the perception and interpretation of the environment. However, the development of deep learning-based techniques for autonomous systems in unstructured outdoor environments poses challenges due to limited data availability for training and testing. To address this gap, we present the German Outdoor and Offroad Dataset (GOOSE), a comprehensive dataset specifically designed for unstructured outdoor environments. The GOOSE dataset incorporates 10000 labeled pairs of images and point clouds, which are utilized to train a range of state-of-the-art segmentation models on both image and point cloud data. We open-source the dataset, along with an ontology for unstructured terrain, as well as dataset standards and guidelines. This initiative aims to establish a common framework, enabling the seamless inclusion of existing datasets and a fast way to enhance the perception capabilities of various robots operating in unstructured environments. This framework also makes it possible to query data for specific weather conditions or sensor setups from a database in the future. The dataset, pre-trained models for offroad perception, and additional documentation can be found at https://goose-dataset.de/.</description>
<size>64605730294</size>
</item><item>
<title>DiTer: Diverse Terrain and Multi-Modal Dataset for Field Robot Navigation in Outdoor Environments</title>
<category>Dataset</category>
<infohash>c604afddbe24fff0b873acbed370ff89df0c1614</infohash>
<guid>https://academictorrents.com/details/c604afddbe24fff0b873acbed370ff89df0c1614</guid>
<link>https://academictorrents.com/details/c604afddbe24fff0b873acbed370ff89df0c1614</link>
<description>Field robots require autonomy in diverse environments to navigate and map their surroundings efficiently. However, the lack of diverse and comprehensive datasets hinders the evaluation and development of autonomous field robots. To address this challenge, we present a multimodal, multisession, and diverse terrain dataset for the ground mapping of field robots. First of all, we utilize a quadrupedal robot as a base platform to collect the dataset. Also, the dataset includes various terrain types, such as sandy roads, vegetation, and sloping terrain. It comprises an RGB-D camera for the ground, an RGB camera, a thermal camera, light detection and ranging (LiDAR), an inertial measurement unit (IMU), and a global positioning system (GPS). In addition, we provide not only the reference trajectories of each dataset but also the global map by leveraging LiDAR-based simultaneous localization and mapping (SLAM) algorithms. Also, we assess our dataset from a terrain perspective and generate fusion maps, such as thermal-LiDAR and RGB-LiDAR maps, to exploit the information beyond the visible spectrum.</description>
<size>755092256494</size>
</item><item>
<title>Challenging data sets for point cloud registration algorithms</title>
<category>Dataset</category>
<infohash>1d1cc1cfe11684210e8b0a2a218e2ecb72c12d29</infohash>
<guid>https://academictorrents.com/details/1d1cc1cfe11684210e8b0a2a218e2ecb72c12d29</guid>
<link>https://academictorrents.com/details/1d1cc1cfe11684210e8b0a2a218e2ecb72c12d29</link>
<description>The number of registration solutions in the literature has bloomed recently. The iterative closest point, for example, could be considered the backbone of many laser-based localization and mapping systems. Although they are widely used, it is a common challenge to compare registration solutions on a fair basis. The main limitation is to overcome the lack of accurate ground truth in current data sets, which usually cover environments only over a small range of organization levels. In computer vision, the Stanford 3D Scanning Repository pushed forward point cloud registration algorithms and object modeling fields by providing high-quality scanned objects with precise localization. We aim to provide similar high-caliber working material to the robotic and computer vision communities, but with sceneries instead of objects. We propose eight point cloud sequences acquired in locations covering the environment diversity that modern robots are likely to encounter, ranging from inside an apartment to a woodland area. The core of the data sets consists of 3D laser point clouds for which supporting data (Gravity, Magnetic North and GPS) are given for each pose. A special effort has been made to ensure global positioning of the scanner within mm-range precision, independent of environmental conditions. This will allow for the development of improved registration algorithms when mapping challenging environments, such as those found in real-world situations.</description>
<size>38956921644</size>
</item><item>
<title>[udemy] Selenium WebDriver 4, Cucumber BDD, Java &amp; More! [NEW: 2023]</title>
<category>Course</category>
<infohash>68b6424c6c1c4eb765334f7b6366160f9a96f245</infohash>
<guid>https://academictorrents.com/details/68b6424c6c1c4eb765334f7b6366160f9a96f245</guid>
<link>https://academictorrents.com/details/68b6424c6c1c4eb765334f7b6366160f9a96f245</link>
<description>Official Course URL: udemy.com/course/cucumber-bdd-selenium-java-complete-automation-course/ Course Overview: Master automation testing with this comprehensive bootcamp covering Selenium WebDriver 4, Java, Cucumber BDD, TestNG, and Jenkins. Learn to build real-world frameworks, tackle complex scenarios, and gain hands-on experience with cutting-edge tools and techniques used in enterprise environments. What You'll Learn: - Selenium &amp; Java: From fundamentals to advanced concepts in test automation. - Cucumber BDD: Master requirement capturing and framework development. - Framework Design: Build robust, scalable automation frameworks. - Best Practices: Explore Page Object Models, Jenkins CI, and Maven integration. Course Benefits: - Comprehensive Training: Covers everything from basics to advanced concepts. - Industry-Ready Skills: Gain expertise in tools highly demanded by employers. - Hands-On Projects: Practice automation testing with real-world websites. - Practical Resources: Course notes, sample code, and practice projects included. Total Hours of Course: 10 hours 5 minutes Course Size: 1.49 GB Subtitles: English, Persian Who is this course for? This course is perfect for manual testers, QA engineers, aspiring SDETs, and automation testers looking to master Selenium and Cucumber BDD frameworks.</description>
<size>1602442262</size>
</item><item>
<title>[udemy] Learn Git Essentials [2024]</title>
<category>Course</category>
<infohash>7fe8b6d871e0a43fd4312c6ef0fe7bf2b485f766</infohash>
<guid>https://academictorrents.com/details/7fe8b6d871e0a43fd4312c6ef0fe7bf2b485f766</guid>
<link>https://academictorrents.com/details/7fe8b6d871e0a43fd4312c6ef0fe7bf2b485f766</link>
<description>Official Course URL: udemy.com/course/learn-git-essentials-2024/ Course Overview: Unlock the potential of Git with this beginner-to-advanced course designed for developers. "Learn Git Essentials [2024]" provides an engaging guide to mastering version control for efficient coding and collaboration. With 39 lessons, it offers hands-on experience with Git commands, workflows, and advanced techniques to streamline development. What You'll Learn: - Git Basics: Set up Git, understand core concepts, and practice essential commands. - Branching &amp; Merging: Manage branches and resolve merge conflicts like a pro. - Remote Repos: Work with GitHub, GitLab, and Bitbucket for collaborative projects. - Advanced Tools: Learn rebasing, stashing, Git hooks, and best practices for efficiency. Course Benefits: - Beginner-Friendly: Start from scratch with no prior Git knowledge required. - Hands-On Demos: Practice real-world workflows with guided exercises. - Collaborative Skills: Master Git for teamwork and version control. - Comprehensive Content: Gain advanced Git techniques to optimize your projects. Total Hours of Course: 4 hours 30 minutes Course Size: 606.8 MB Subtitles: English, Persian Who is this course for? This course is perfect for aspiring developers and software engineers looking to master Git for version control and collaboration.</description>
<size>636380000</size>
</item><item>
<title>[udemy] The Complete Hands-on Introduction to Airbyte</title>
<category>Course</category>
<infohash>2171960ee66c07fd5f251eac81268ca0a40723b5</infohash>
<guid>https://academictorrents.com/details/2171960ee66c07fd5f251eac81268ca0a40723b5</guid>
<link>https://academictorrents.com/details/2171960ee66c07fd5f251eac81268ca0a40723b5</link>
<description>Official Course URL: udemy.com/course/the-complete-hands-on-introduction-to-airbyte/ Course Overview: Master Airbyte, the powerful open-source data integration tool, in this beginner-friendly course by Marc Lamberti. Learn to consolidate data from various sources into your data warehouses or databases seamlessly. Discover how to use Airbyte alongside tools like Apache Airflow, dbt, and Snowflake to build robust pipelines. With 56 practical lessons, this course equips you to run efficient data synchronizations, set up notifications, and manage data pipelines effectively. What You'll Learn: - Airbyte Essentials: Understand its architecture, features, and role in data integration. - Hands-On Setup: Install and configure Airbyte locally with Docker and Kubernetes. - Data Integration: Connect Airbyte to multiple sources and destinations for seamless syncs. - Pipeline Building: Create end-to-end pipelines with dbt, Airflow, Postgres, Snowflake, and more. - Best Practices: Optimize workflows, set up monitoring, and manage notifications efficiently. Course Benefits: - Beginner-Friendly Approach: Perfect for those starting in data integration and pipeline building. - Extensive Toolset: Learn how to use Airbyte with complementary tools like dbt and Snowflake. - Practical Learning: Apply knowledge through hands-on exercises and a real-world project. - Lifetime Access: Revisit content anytime with lifetime course access. Total Hours of Course: 3 hours 17 minutes Course Size: 480.4 MB Subtitles: English, Persian Who is this course for? This course is designed for Data Engineers, Analytics Engineers, and Data Architects looking to integrate Airbyte into their workflows efficiently.</description>
<size>503805000</size>
</item><item>
<title>MIT OCW 6.100L Introduction to CS and Programming using Python (Fall 2022)</title>
<category>Course</category>
<infohash>fb014a1ffea0158f6104c3f51cd1e7724596bcc9</infohash>
<guid>https://academictorrents.com/details/fb014a1ffea0158f6104c3f51cd1e7724596bcc9</guid>
<link>https://academictorrents.com/details/fb014a1ffea0158f6104c3f51cd1e7724596bcc9</link>
<description>https://ocw.mit.edu/courses/6-100l-introduction-to-cs-and-programming-using-python-fall-2022/</description>
<size>5974263133</size>
</item><item>
<title>MIT The Missing Semester of Your CS Education (2020)</title>
<category>Course</category>
<infohash>fe4cc754cddc2a14d3f25ce95e37f02fb051b8a4</infohash>
<guid>https://academictorrents.com/details/fe4cc754cddc2a14d3f25ce95e37f02fb051b8a4</guid>
<link>https://academictorrents.com/details/fe4cc754cddc2a14d3f25ce95e37f02fb051b8a4</link>
<description>https://missing.csail.mit.edu/</description>
<size>1681026459</size>
</item><item>
<title>Reddit comments/submissions 2024-11</title>
<category>Dataset</category>
<infohash>03bc984c162b9f7818228b34a3671a64c1fbb17d</infohash>
<guid>https://academictorrents.com/details/03bc984c162b9f7818228b34a3671a64c1fbb17d</guid>
<link>https://academictorrents.com/details/03bc984c162b9f7818228b34a3671a64c1fbb17d</link>
<description>Reddit comments and submissions from 2024-11 collected by u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps Previous months can be found here https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</description>
<size>51799890475</size>
</item><item>
<title>[udemy] Laravel 11 - From Basics to Advance (2024)</title>
<category>Course</category>
<infohash>da3b771fb498e367bc146806c4f192d590f24352</infohash>
<guid>https://academictorrents.com/details/da3b771fb498e367bc146806c4f192d590f24352</guid>
<link>https://academictorrents.com/details/da3b771fb498e367bc146806c4f192d590f24352</link>
<description>Official Course URL: udemy.com/course/laravel-11-from-basics-to-advance/ Course Overview: Dive into Laravel 11 - From Basics to Advance (2024), a comprehensive course designed to take you from foundational concepts to advanced techniques in Laravel 11. Whether you're a newcomer to web development or looking to enhance your skills, this course offers a structured path to mastering Laravel, enabling you to build professional-grade applications with confidence. What You'll Learn: - Routing and Controllers: Understand the flow of requests and responses in Laravel applications. - Blade Templates and Views: Create dynamic and reusable user interfaces efficiently. - Database Management: Utilize Models, Migrations, Seeders, and the Eloquent ORM for effective data handling. - Form Handling and Validation: Implement robust forms with built-in validation mechanisms. - File Storage and Management: Manage file uploads and storage seamlessly. - Middleware and HTTP Responses: Control request processing and customize responses. - Authentication and Authorization: Secure your applications with user authentication and access control. - Mail and Notifications: Integrate email functionalities for user communication. - Blade Components and Session Management: Enhance modularity and maintain user sessions. - Advanced Topics: Explore Queues, Background Processing, Observers, Event Listeners, Broadcasting, Service Container, and API development. - Hands-on Projects: Build real-world applications like an E-commerce Cart, Real-Time Messenger, and a Google Keep Clone to solidify your learning. Course Benefits: - Expert Instruction: Learn from seasoned developers with extensive experience in Laravel. - Practical Approach: Engage in hands-on projects that mirror real-world scenarios. - Comprehensive Curriculum: Gain a deep understanding of both basic and advanced Laravel concepts. - Flexible Learning: Access course materials anytime with lifetime access. 
- Community Support: Join a community of learners and receive assistance through Q&amp;A forums. Total Hours of Course: 51 hours 44 minutes Course Size: 4.58 GB Subtitles: English, Persian Who is this course for? This course is ideal for beginners in web development, aspiring Laravel developers, junior developers seeking to enhance their skills, PHP developers transitioning to Laravel, freelancers, entrepreneurs, students, educators, and professionals aiming for career growth in web development.</description>
<size>4926118354</size>
</item><item>
<title>enwiki-20241201-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>87098feed22fd5848964c6fc87de09cea94f020d</infohash>
<guid>https://academictorrents.com/details/87098feed22fd5848964c6fc87de09cea94f020d</guid>
<link>https://academictorrents.com/details/87098feed22fd5848964c6fc87de09cea94f020d</link>
<description>English Wikipedia Multistream 2024-12-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>24318419130</size>
</item><item>
<title>Reddit comments/submissions 2024-11</title>
<category>Dataset</category>
<infohash>a1b490117808d9541ab9e3e67a3447e2f4f48f01</infohash>
<guid>https://academictorrents.com/details/a1b490117808d9541ab9e3e67a3447e2f4f48f01</guid>
<link>https://academictorrents.com/details/a1b490117808d9541ab9e3e67a3447e2f4f48f01</link>
<description>Reddit comments and submissions from 2024-11 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>50936316370</size>
</item><item>
<title>Unearthing a Billion Telegram Posts about the 2024 U.S. Presidential Election: Development of a Public Dataset (v2)</title>
<category>Dataset</category>
<infohash>5b3a589175108abbe2afcd8e77c92f97e5b6100f</infohash>
<guid>https://academictorrents.com/details/5b3a589175108abbe2afcd8e77c92f97e5b6100f</guid>
<link>https://academictorrents.com/details/5b3a589175108abbe2afcd8e77c92f97e5b6100f</link>
<description>With its lenient moderation policies and long-standing associations with potentially unlawful activities, Telegram has become an incubator for problematic content, frequently featuring conspiratorial, hyper-partisan, and fringe narratives. In the political sphere, these concerns are amplified by reports of Telegram channels being used to organize violent acts, such as those that occurred during the Capitol Hill attack on January 6, 2021. As the 2024 U.S. election approaches, Telegram remains a focal arena for societal and political discourse, warranting close attention from the research community, regulators, and the media. Based on these premises, we introduce and release a Telegram dataset focused on the 2024 U.S. Presidential Election, featuring over 30,000 chats and half a billion messages, including chat details, profile pictures, messages, and user information. We constructed a network of chats and analyzed the 500 most central ones, examining their shared messages. This resource represents the largest public Telegram dataset to date, offering an unprecedented opportunity to study political discussion on Telegram in the lead-up to the 2024 U.S. election. We will continue to collect data until the end of 2024, and routinely update the dataset released at: https://github.com/leonardo-blas/usc-tg-24-us-election.</description>
<size>630523537687</size>
</item><item>
<title>MIT OCW 18.S191 Introduction to Computational Thinking (Fall 2022)</title>
<category>Course</category>
<infohash>d72ad3cf23be9be6df34240d3ea24b3071003d9e</infohash>
<guid>https://academictorrents.com/details/d72ad3cf23be9be6df34240d3ea24b3071003d9e</guid>
<link>https://academictorrents.com/details/d72ad3cf23be9be6df34240d3ea24b3071003d9e</link>
<description>This class uses revolutionary programmable interactivity to combine material from three fields creating an engaging, efficient learning solution to prepare students to be sophisticated and intuitive thinkers, programmers, and solution providers for the modern interconnected online world. Upon completion, students are well trained to be scientific “trilinguals”, seeing and experimenting with mathematics interactively as math is meant to be seen, and ready to participate and contribute to open source development of large projects and ecosystems. More info: https://computationalthinking.mit.edu/Fall22/</description>
<size>4914198378</size>
</item><item>
<title>MIT OCW 14.13 Psychology and Economics (Spring 2020)</title>
<category>Course</category>
<infohash>c4f6dbdff92ef7455976f38568969684e085aa1d</infohash>
<guid>https://academictorrents.com/details/c4f6dbdff92ef7455976f38568969684e085aa1d</guid>
<link>https://academictorrents.com/details/c4f6dbdff92ef7455976f38568969684e085aa1d</link>
<description>Psychology and Economics (aka Behavioral Economics) is a growing subfield of economics that incorporates insights from psychology and other social sciences into economics. This course covers recent advances in behavioral economics by reviewing some of the assumptions made in mainstream economic models, and by discussing how human behavior systematically departs from these assumptions. Applications will cover a wide range of fields, including labor and public economics, industrial organization, health economics, finance, and development economics.</description>
<size>7081077225</size>
</item><item>
<title>MIT OCW 14.310x Data Analysis for Social Scientists (Spring 2023)</title>
<category>Course</category>
<infohash>2437f684caa1a06b0c3aad7dc184e3f89f897776</infohash>
<guid>https://academictorrents.com/details/2437f684caa1a06b0c3aad7dc184e3f89f897776</guid>
<link>https://academictorrents.com/details/2437f684caa1a06b0c3aad7dc184e3f89f897776</link>
<description>This course introduces methods for harnessing data to answer questions of cultural, social, economic, and policy interest. We will start with essential notions of probability and statistics. We will proceed to cover techniques in modern data analysis: regression and econometrics, design of experiments, randomized control trials (and A/B testing), machine learning, and data visualization. We will illustrate these concepts with applications drawn from real-world examples and frontier research. Finally, we will provide instruction on the use of the statistical package R, and opportunities for students to perform self-directed empirical analyses.</description>
<size>5899614012</size>
</item><item>
<title>Reddit comments/submissions 2024-10</title>
<category>Dataset</category>
<infohash>24119c565deeefe2f3e29818fcbe123174d1ece0</infohash>
<guid>https://academictorrents.com/details/24119c565deeefe2f3e29818fcbe123174d1ece0</guid>
<link>https://academictorrents.com/details/24119c565deeefe2f3e29818fcbe123174d1ece0</link>
<description>Reddit comments and submissions from 2024-10 collected by u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps Previous months can be found here https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</description>
<size>52776919505</size>
</item><item>
<title>enwiki-20241101-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>9095e723a5ca657d7308bb10a8e9d950f0d7165f</infohash>
<guid>https://academictorrents.com/details/9095e723a5ca657d7308bb10a8e9d950f0d7165f</guid>
<link>https://academictorrents.com/details/9095e723a5ca657d7308bb10a8e9d950f0d7165f</link>
<description>English Wikipedia Multistream 2024-11-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>24208758564</size>
</item><item>
<title>Stack Exchange Data Dump (2024-02-29)</title>
<category>Dataset</category>
<infohash>5a3d9b0c0855056a5ef6e90d9766c3005145c979</infohash>
<guid>https://academictorrents.com/details/5a3d9b0c0855056a5ef6e90d9766c3005145c979</guid>
<link>https://academictorrents.com/details/5a3d9b0c0855056a5ef6e90d9766c3005145c979</link>
<description>This data dump is sourced from the various sites in the Stack Exchange network of Q&amp;A sites. This dump contains data up to and including 2024-02-29. The exact licenses for each bit of content are embedded in each entry. For license date ranges, see https://stackoverflow.com/help/licensing. This torrent has a non-torrent download on archive.org: https://archive.org/details/stackexchange_20240305 - Note that the archive.org torrent there differs from the torrent reuploaded here! The archive.org date indicates the upload date, not the end date for content inclusion like the torrent name here does (as per a new convention). For more details, see https://meta.stackexchange.com/q/398279. The torrent uploaded here is an old version initially posted to https://archive.org/details/stackexchange, as per the community-generated archive list: https://meta.stackexchange.com/a/224922</description>
<size>91754594304</size>
</item><item>
<title>Stack Exchange Data Dump (2024-03-31)</title>
<category>Dataset</category>
<infohash>2ef5246c89679a43977b3b75eb6ab48bb15c73ae</infohash>
<guid>https://academictorrents.com/details/2ef5246c89679a43977b3b75eb6ab48bb15c73ae</guid>
<link>https://academictorrents.com/details/2ef5246c89679a43977b3b75eb6ab48bb15c73ae</link>
<description>This data dump is sourced from the various sites in the Stack Exchange network of Q&amp;A sites. This dump contains data up to and including 2024-03-31, and is an extraordinary release caused by a shift in the data dump schedule. The exact licenses for each bit of content are embedded in each entry. For license date ranges, see https://stackoverflow.com/help/licensing. This torrent has a non-torrent download on archive.org: https://archive.org/details/stackexchange_20240402_bis - Note that the archive.org torrent there differs from the torrent reuploaded here! The torrent uploaded here is an old version initially posted to https://archive.org/details/stackexchange, as per the community-generated archive list: https://meta.stackexchange.com/a/224922</description>
<size>100067704832</size>
</item><item>
<title>Stack Exchange Data Dump (2024-06-30, re-release)</title>
<category>Dataset</category>
<infohash>42518003034f66c387df75c896e653644f402e7b</infohash>
<guid>https://academictorrents.com/details/42518003034f66c387df75c896e653644f402e7b</guid>
<link>https://academictorrents.com/details/42518003034f66c387df75c896e653644f402e7b</link>
<description>This data dump is sourced from the various sites in the Stack Exchange network of Q&amp;A sites. This dump contains data up to and including 2024-06-30. This version was re-released in late August with bugfixes from the initial, flawed 2024-06-30 release. The exact licenses for each bit of content are embedded in each entry. For license date ranges, see the root-level license.txt, or https://stackoverflow.com/help/licensing. For the schema, see the sede-and-data-dump-schema.md file within each .7z archive. This torrent has been mirrored from https://archive.org/details/stackexchange_20240630_revised</description>
<size>84410368000</size>
</item><item>
<title>Stack Exchange Data Dump (2024-06-30, initial release)</title>
<category>Dataset</category>
<infohash>5afacf7e3d23c75e19d7d94be7e83208a5e8423a</infohash>
<guid>https://academictorrents.com/details/5afacf7e3d23c75e19d7d94be7e83208a5e8423a</guid>
<link>https://academictorrents.com/details/5afacf7e3d23c75e19d7d94be7e83208a5e8423a</link>
<description>This data dump is sourced from the various sites in the Stack Exchange network of Q&amp;A sites. This dump contains data up to and including 2024-06-30. This is the original release of the 2024-06-30 data dump, with some export issues. The exact licenses for each bit of content are embedded in each entry. For license date ranges, see the root-level license.txt, or https://stackoverflow.com/help/licensing. For the schema, see the sede-and-data-dump-schema.md file within each .7z archive. This torrent has been mirrored from https://archive.org/details/stackexchange_20240630, though due to upload problems, the torrent was not auto-generated by archive.org.</description>
<size>97215578829</size>
</item><item>
<title>Stack Exchange Data Dump (2024-09-30)</title>
<category>Dataset</category>
<infohash>745b43ba30154eb2bc1df080b4c12de904edb2cb</infohash>
<guid>https://academictorrents.com/details/745b43ba30154eb2bc1df080b4c12de904edb2cb</guid>
<link>https://academictorrents.com/details/745b43ba30154eb2bc1df080b4c12de904edb2cb</link>
<description>This data dump is sourced from the various sites in the Stack Exchange network of Q&amp;A sites. This dump contains data up to and including 2024-09-30. The exact licenses for each bit of content are embedded in each entry. For license date ranges, see the root-level license.txt, or https://stackoverflow.com/help/licensing. For the schema, see the sede-and-data-dump-schema.md file within each .7z archive. This torrent has been mirrored from https://archive.org/details/stackexchange_20240930</description>
<size>97135886336</size>
</item><item>
<title>Reddit comments/submissions 2024-10</title>
<category>Dataset</category>
<infohash>507dfcda29de9936dd77ed4f34c6442dc675c98f</infohash>
<guid>https://academictorrents.com/details/507dfcda29de9936dd77ed4f34c6442dc675c98f</guid>
<link>https://academictorrents.com/details/507dfcda29de9936dd77ed4f34c6442dc675c98f</link>
<description>Reddit comments and submissions from 2024-10 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>51894110519</size>
</item><item>
<title>Unearthing a Billion Telegram Posts about the 2024 U.S. Presidential Election: Development of a Public Dataset</title>
<category>Dataset</category>
<infohash>969ef8cbef89bcd6dc88e85e30a37a630c0ba76f</infohash>
<guid>https://academictorrents.com/details/969ef8cbef89bcd6dc88e85e30a37a630c0ba76f</guid>
<link>https://academictorrents.com/details/969ef8cbef89bcd6dc88e85e30a37a630c0ba76f</link>
<description>This dataset is outdated. To access the latest version, visit https://github.com/leonardo-blas/usc-tg-24-us-election.</description>
<size>492918618691</size>
</item><item>
<title>Reddit comments/submissions 2024-09</title>
<category>Dataset</category>
<infohash>2db688e16dff920b8403f68e45cdc61271eee79a</infohash>
<guid>https://academictorrents.com/details/2db688e16dff920b8403f68e45cdc61271eee79a</guid>
<link>https://academictorrents.com/details/2db688e16dff920b8403f68e45cdc61271eee79a</link>
<description>Reddit comments and submissions from 2024-09 collected by u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps Previous months can be found here https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</description>
<size>50507683522</size>
</item><item>
<title>Qyber-SpinNet-elc-xx-rings-V1.0</title>
<category>Dataset</category>
<infohash>81417c73f676ee8fe5f31426cf200fe9fab3a728</infohash>
<guid>https://academictorrents.com/details/81417c73f676ee8fe5f31426cf200fe9fab3a728</guid>
<link>https://academictorrents.com/details/81417c73f676ee8fe5f31426cf200fe9fab3a728</link>
<description>This is a dataset to investigate the robustness of energy landscape controllers for transporting a single excitation in XX-coupled spin-1/2 rings of size 5 and 6. Transfers from spin 1 to spins 2,3 (for a 5-ring) and 2,3,4 (for a 6-ring) are considered (all others can be obtained from the symmetries in the rings). It also contains the robustness analysis of the controllers. For further details see https://qyber.black/spinnet/info-spinnet/</description>
<size>70967623680</size>
</item><item>
<title>Natural Born Genius (Human Genius Documentary)</title>
<category>Course</category>
<infohash>e7c7b54b47e4e8e5ab1acf91a4c9a67637e16074</infohash>
<guid>https://academictorrents.com/details/e7c7b54b47e4e8e5ab1acf91a4c9a67637e16074</guid>
<link>https://academictorrents.com/details/e7c7b54b47e4e8e5ab1acf91a4c9a67637e16074</link>
<description>Natural Born Genius? (3 November 1997), an episode of Equinox, the defunct (July 1986 - December 2006) Channel 4 science documentary series. https://en.wikipedia.org/wiki/List_of_Equinox_episodes The episode is about inherited intelligence and features: population geneticist Robert Plomin, who came to London in 1994; bioethicist Jonathan Glover; psychologist Camilla Benbow of Iowa State University and the Study of Mathematically Precocious Youth; statistician Charles Spearman, who was the first to conduct research into general intelligence in 1904, found that schoolchildren who were good at one academic subject were also good at other subjects and vice versa, and coined the term g factor (psychometrics), although Francis Galton had briefly looked at the subject; psychologist Ian Deary of the University of Edinburgh, where much research on intelligence has been conducted; Alfred Binet of France, who invented the intelligence test so that low-scoring children could be given extra help; cognitive psychologist Michael Howe of the University of Exeter, who believed that the 11-plus exams in England, used for grammar school entrance, could possibly label children; the Educational Testing Service (ETS), as much educational testing took place in the US, where the government believed in it; psychologist Stephen J. Ceci, who disputed reading cast-iron outcomes from individual intelligence tests but nonetheless believed that general intelligence could be predicted across a population as a whole; Chris Allander of the Army School of Recruiting; Michael Howe's environmental education research on parental support and whether it was as much a factor as genetic or innate ability - environmental educational research largely ignored any genetic component, which Robert Plomin dismissed; psychologist Sandra Scarr, whose extensive research on adopted children largely and conclusively proved the case for genetically inherited intelligence and abilities, and who believed that you could only teach subjects to children who had enough capability; Plomin's wife, psychologist Judith Dunn, on the varying intelligence of children within individual families; identical twins raised apart, who had identical IQ scores; psychologist Thalia C. Eley of the Institute of Psychiatry, Psychology and Neuroscience; and Plomin's exhaustive adoption genetic studies, in which he finds a genetic link. Narrated by Barbara Flynn, produced by Rosalind Arden, directed by David Cresswell, made by John Gau Productions; John Gau had made Triumph of the Nerds the year before. Download: https://workupload.com/file/4jnCU8wxPZv Streaming Site: https://www.dailymotion.com/video/x7l0guz</description>
<size>807619272</size>
</item><item>
<title>enwiki-20241001-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>f52552399013035faac51b32aa57517e380dcf60</infohash>
<guid>https://academictorrents.com/details/f52552399013035faac51b32aa57517e380dcf60</guid>
<link>https://academictorrents.com/details/f52552399013035faac51b32aa57517e380dcf60</link>
<description>English Wikipedia Multistream 2024-10-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>24086538977</size>
</item><item>
<title>Reddit comments/submissions 2024-09</title>
<category>Dataset</category>
<infohash>43a6e113d6ecacf38e58ecc6caa28d68892dd8af</infohash>
<guid>https://academictorrents.com/details/43a6e113d6ecacf38e58ecc6caa28d68892dd8af</guid>
<link>https://academictorrents.com/details/43a6e113d6ecacf38e58ecc6caa28d68892dd8af</link>
<description>Reddit comments and submissions from 2024-09 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>49672608422</size>
</item><item>
<title>Reddit comments/submissions 2024-08</title>
<category>Dataset</category>
<infohash>24fcb01b37d8c3a0b835aa38b8a2f358d4b4f864</infohash>
<guid>https://academictorrents.com/details/24fcb01b37d8c3a0b835aa38b8a2f358d4b4f864</guid>
<link>https://academictorrents.com/details/24fcb01b37d8c3a0b835aa38b8a2f358d4b4f864</link>
<description>Reddit comments and submissions from 2024-08 collected by u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps Previous months can be found here https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</description>
<size>52984231838</size>
</item><item>
<title>plate</title>
<category>Dataset</category>
<infohash>3eed269a4050bb7249bb3a7f44ee7fa3d461a70c</infohash>
<guid>https://academictorrents.com/details/3eed269a4050bb7249bb3a7f44ee7fa3d461a70c</guid>
<link>https://academictorrents.com/details/3eed269a4050bb7249bb3a7f44ee7fa3d461a70c</link>
<description>Cool things</description>
<size>3410239000</size>
</item><item>
<title>enwiki-20200201-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>efbfd62499b7ad8dad27957d762c967a062bb443</infohash>
<guid>https://academictorrents.com/details/efbfd62499b7ad8dad27957d762c967a062bb443</guid>
<link>https://academictorrents.com/details/efbfd62499b7ad8dad27957d762c967a062bb443</link>
<description>English Wikipedia Multistream 2020-02-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>17860235219</size>
</item><item>
<title>enwiki-20200301-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>526cf73f7523caa7fa8acfecd69000d475b18bae</infohash>
<guid>https://academictorrents.com/details/526cf73f7523caa7fa8acfecd69000d475b18bae</guid>
<link>https://academictorrents.com/details/526cf73f7523caa7fa8acfecd69000d475b18bae</link>
<description>English Wikipedia Multistream 2020-03-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>17958343307</size>
</item><item>
<title>enwiki-20200401-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>d7ed8702f74b6db246abf75b78fe1cee3addd405</infohash>
<guid>https://academictorrents.com/details/d7ed8702f74b6db246abf75b78fe1cee3addd405</guid>
<link>https://academictorrents.com/details/d7ed8702f74b6db246abf75b78fe1cee3addd405</link>
<description>English Wikipedia Multistream 2020-04-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>18128759136</size>
</item><item>
<title>enwiki-20201101-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>ea7ac41f556030780002d0b00e9bb375e153a416</infohash>
<guid>https://academictorrents.com/details/ea7ac41f556030780002d0b00e9bb375e153a416</guid>
<link>https://academictorrents.com/details/ea7ac41f556030780002d0b00e9bb375e153a416</link>
<description>English Wikipedia Multistream 2020-11-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>18902274829</size>
</item><item>
<title>enwiki-20210101-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>78d1bfa10317830a6282cb78ec43d8716dcea45a</infohash>
<guid>https://academictorrents.com/details/78d1bfa10317830a6282cb78ec43d8716dcea45a</guid>
<link>https://academictorrents.com/details/78d1bfa10317830a6282cb78ec43d8716dcea45a</link>
<description>English Wikipedia Multistream 2021-01-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>19103554379</size>
</item><item>
<title>enwiki-20210401-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>c7cab5f54fab92dbbb7c085b942648f8f65fe799</infohash>
<guid>https://academictorrents.com/details/c7cab5f54fab92dbbb7c085b942648f8f65fe799</guid>
<link>https://academictorrents.com/details/c7cab5f54fab92dbbb7c085b942648f8f65fe799</link>
<description>English Wikipedia Multistream 2021-04-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>19443502355</size>
</item><item>
<title>enwiki-20210501-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>668b87e182feadeb3b529ca8174afece7e92c555</infohash>
<guid>https://academictorrents.com/details/668b87e182feadeb3b529ca8174afece7e92c555</guid>
<link>https://academictorrents.com/details/668b87e182feadeb3b529ca8174afece7e92c555</link>
<description>English Wikipedia Multistream 2021-05-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>19557287945</size>
</item><item>
<title>enwiki-20210601-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>edb256b5686f048e2288f0cf5f26ff7f7d68a3db</infohash>
<guid>https://academictorrents.com/details/edb256b5686f048e2288f0cf5f26ff7f7d68a3db</guid>
<link>https://academictorrents.com/details/edb256b5686f048e2288f0cf5f26ff7f7d68a3db</link>
<description>English Wikipedia Multistream 2021-06-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>19670238766</size>
</item><item>
<title>enwiki-20210701-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>10dcf0a8e750c42815e26df1fb092531d933d9fe</infohash>
<guid>https://academictorrents.com/details/10dcf0a8e750c42815e26df1fb092531d933d9fe</guid>
<link>https://academictorrents.com/details/10dcf0a8e750c42815e26df1fb092531d933d9fe</link>
<description>English Wikipedia Multistream 2021-07-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>19773796684</size>
</item><item>
<title>enwiki-20210801-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>21a1627ba6c29d1864e0a90c53b0f3b243bb8806</infohash>
<guid>https://academictorrents.com/details/21a1627ba6c29d1864e0a90c53b0f3b243bb8806</guid>
<link>https://academictorrents.com/details/21a1627ba6c29d1864e0a90c53b0f3b243bb8806</link>
<description>English Wikipedia Multistream 2021-08-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>19890453793</size>
</item><item>
<title>enwiki-20211001-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>06696a70ea711bff3a54ad5122d4400281aeb677</infohash>
<guid>https://academictorrents.com/details/06696a70ea711bff3a54ad5122d4400281aeb677</guid>
<link>https://academictorrents.com/details/06696a70ea711bff3a54ad5122d4400281aeb677</link>
<description>English Wikipedia Multistream 2021-10-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>20087389866</size>
</item><item>
<title>enwiki-20220101-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>1cb64bba43b71034613b9ecf559b4eb506dbfc70</infohash>
<guid>https://academictorrents.com/details/1cb64bba43b71034613b9ecf559b4eb506dbfc70</guid>
<link>https://academictorrents.com/details/1cb64bba43b71034613b9ecf559b4eb506dbfc70</link>
<description>English Wikipedia Multistream 2022-01-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>20393186102</size>
</item><item>
<title>enwiki-20230101-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>ae39ba10bc1ce9556e9efee7acceb8957880624b</infohash>
<guid>https://academictorrents.com/details/ae39ba10bc1ce9556e9efee7acceb8957880624b</guid>
<link>https://academictorrents.com/details/ae39ba10bc1ce9556e9efee7acceb8957880624b</link>
<description>English Wikipedia Multistream 2023-01-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>21542107623</size>
</item><item>
<title>enwiki-20230401-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>da028342c4848311c8fd0767ae07c9984b71f6af</infohash>
<guid>https://academictorrents.com/details/da028342c4848311c8fd0767ae07c9984b71f6af</guid>
<link>https://academictorrents.com/details/da028342c4848311c8fd0767ae07c9984b71f6af</link>
<description>English Wikipedia Multistream 2023-04-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>21843230219</size>
</item><item>
<title>enwiki-20240401-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>b95491df8cccbb018317d62836aa7fdc3df68b53</infohash>
<guid>https://academictorrents.com/details/b95491df8cccbb018317d62836aa7fdc3df68b53</guid>
<link>https://academictorrents.com/details/b95491df8cccbb018317d62836aa7fdc3df68b53</link>
<description>English Wikipedia Multistream 2024-04-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>23071298480</size>
</item><item>
<title>enwiki-20240501-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>76c83ae82da1c325cfefa3b4074e270daf9ac164</infohash>
<guid>https://academictorrents.com/details/76c83ae82da1c325cfefa3b4074e270daf9ac164</guid>
<link>https://academictorrents.com/details/76c83ae82da1c325cfefa3b4074e270daf9ac164</link>
<description>English Wikipedia Multistream 2024-05-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>23184900944</size>
</item><item>
<title>enwiki-20240601-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>6498dc9b779092e15c291c484541a583fc6c81ee</infohash>
<guid>https://academictorrents.com/details/6498dc9b779092e15c291c484541a583fc6c81ee</guid>
<link>https://academictorrents.com/details/6498dc9b779092e15c291c484541a583fc6c81ee</link>
<description>English Wikipedia Multistream 2024-06-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>23648605741</size>
</item><item>
<title>Reddit comments/submissions 2024-08</title>
<category>Dataset</category>
<infohash>8c2d4b00ce8ff9d45e335bed106fe9046c60adb0</infohash>
<guid>https://academictorrents.com/details/8c2d4b00ce8ff9d45e335bed106fe9046c60adb0</guid>
<link>https://academictorrents.com/details/8c2d4b00ce8ff9d45e335bed106fe9046c60adb0</link>
<description>Reddit comments and submissions from 2024-08 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>52071825869</size>
</item><item>
<title>wikidata-20240902-all.json.bz2</title>
<category>Dataset</category>
<infohash>7bee8ece634c55ab4ed7da5a56dd81578729ed2b</infohash>
<guid>https://academictorrents.com/details/7bee8ece634c55ab4ed7da5a56dd81578729ed2b</guid>
<link>https://academictorrents.com/details/7bee8ece634c55ab4ed7da5a56dd81578729ed2b</link>
<description>Wikidata JSON All 2024-09-02</description>
<size>91964359511</size>
</item><item>
<title>enwiki-20240901-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>2f404a46276fa175fc455a8a008ebd8980b9decf</infohash>
<guid>https://academictorrents.com/details/2f404a46276fa175fc455a8a008ebd8980b9decf</guid>
<link>https://academictorrents.com/details/2f404a46276fa175fc455a8a008ebd8980b9decf</link>
<description>English Wikipedia Multistream 2024-09-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>23964574374</size>
</item><item>
<title>wikidata-20240805-all.json.bz2</title>
<category>Dataset</category>
<infohash>fe977c0e13549b85bf14e37300676c0f8cc8e97a</infohash>
<guid>https://academictorrents.com/details/fe977c0e13549b85bf14e37300676c0f8cc8e97a</guid>
<link>https://academictorrents.com/details/fe977c0e13549b85bf14e37300676c0f8cc8e97a</link>
<description>Wikidata JSON All 2024-08-05</description>
<size>90941953140</size>
</item><item>
<title>wikidata-20240701-all.json.bz2</title>
<category>Dataset</category>
<infohash>dc083577b9f773ef0d41a3eba21b8694d5a56e99</infohash>
<guid>https://academictorrents.com/details/dc083577b9f773ef0d41a3eba21b8694d5a56e99</guid>
<link>https://academictorrents.com/details/dc083577b9f773ef0d41a3eba21b8694d5a56e99</link>
<description>Wikidata JSON All 2024-07-01</description>
<size>89940529332</size>
</item><item>
<title>VGGHeads.tar</title>
<category>Dataset</category>
<infohash>1ac36f16386061685ed303dea6f0d6179d2e2121</infohash>
<guid>https://academictorrents.com/details/1ac36f16386061685ed303dea6f0d6179d2e2121</guid>
<link>https://academictorrents.com/details/1ac36f16386061685ed303dea6f0d6179d2e2121</link>
<description>VGGHeads is a large-scale synthetic dataset generated with diffusion models for human head detection and 3D mesh estimation. It comprises over 1 million high-resolution images, each annotated with detailed 3D head meshes, facial landmarks, and bounding boxes.</description>
<size>187111997440</size>
</item><item>
<title>Gab Posts - 2016-08 to 2018-10</title>
<category>Dataset</category>
<infohash>064f2953e8b16a9b33119874aa0b1a907d857bc1</infohash>
<guid>https://academictorrents.com/details/064f2953e8b16a9b33119874aa0b1a907d857bc1</guid>
<link>https://academictorrents.com/details/064f2953e8b16a9b33119874aa0b1a907d857bc1</link>
<description>These are Gab Social posts from 2016-08 to 2018-10 as collected by Pushshift. Each month of posts is packaged in a bz2-compressed file.</description>
<size>6335721133</size>
</item><item>
<title>The Million Song Dataset.</title>
<category>Dataset</category>
<infohash>fecaeaf2f97a0cd9f62fdaafaac70a6a96fa4ac0</infohash>
<guid>https://academictorrents.com/details/fecaeaf2f97a0cd9f62fdaafaac70a6a96fa4ac0</guid>
<link>https://academictorrents.com/details/fecaeaf2f97a0cd9f62fdaafaac70a6a96fa4ac0</link>
<description>We introduce the Million Song Dataset, a freely-available collection of audio features and metadata for a million contemporary popular music tracks. We describe its creation process, its content, and its possible uses. Attractive features of the Million Song Dataset include the range of existing resources to which it is linked, and the fact that it is the largest current research dataset in our field. As an illustration, we present year prediction as an example application, a task that has, until now, been difficult to study owing to the absence of a large set of suitable data. We show positive results on year prediction, and discuss more generally the future development of the dataset.</description>
<size>214163931939</size>
</item><item>
<title>ShitSpotter - 2024-07-03</title>
<category>Dataset</category>
<infohash>ee8d2c87a39ea9bfe48bef7eb4ca12eb68852c49</infohash>
<guid>https://academictorrents.com/details/ee8d2c87a39ea9bfe48bef7eb4ca12eb68852c49</guid>
<link>https://academictorrents.com/details/ee8d2c87a39ea9bfe48bef7eb4ca12eb68852c49</link>
<description>This is the version of the ShitSpotter dataset as of 2024-07-03, which is the first version distributed as a torrent. A newer version (which should share many files with this torrent) is available at https://academictorrents.com/details/27a2512ae93298f75544be6d2d629dfb186f86cf</description>
<size>44949900501</size>
</item><item>
<title>Reddit comments/submissions 2024-07</title>
<category>Dataset</category>
<infohash>efa08a6825abca13f7dfb7c4d0d56028ba5003f2</infohash>
<guid>https://academictorrents.com/details/efa08a6825abca13f7dfb7c4d0d56028ba5003f2</guid>
<link>https://academictorrents.com/details/efa08a6825abca13f7dfb7c4d0d56028ba5003f2</link>
<description>Reddit comments and submissions from 2024-07 collected by u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps Previous months can be found here https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</description>
<size>51503900423</size>
</item><item>
<title>enwiki-20240801-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>32df1503a39d71cc1840c29f60b53ee9ede0bd6a</infohash>
<guid>https://academictorrents.com/details/32df1503a39d71cc1840c29f60b53ee9ede0bd6a</guid>
<link>https://academictorrents.com/details/32df1503a39d71cc1840c29f60b53ee9ede0bd6a</link>
<description>English Wikipedia Multistream 2024-08-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>23851879117</size>
</item><item>
<title>enwiki-20240701-pages-articles-multistream.xml.bz2</title>
<category>Dataset</category>
<infohash>99ead3f8c8d181b4aa27aa46d9cd12cb1c8ff9ba</infohash>
<guid>https://academictorrents.com/details/99ead3f8c8d181b4aa27aa46d9cd12cb1c8ff9ba</guid>
<link>https://academictorrents.com/details/99ead3f8c8d181b4aa27aa46d9cd12cb1c8ff9ba</link>
<description>English Wikipedia Multistream 2024-07-01 https://en.wikipedia.org/wiki/Wikipedia:Database_download</description>
<size>23735137856</size>
</item><item>
<title>Invest Epitome Public DatasetV24.2</title>
<category>Dataset</category>
<infohash>7beca571bcadd509492c7fbdef04a792dad257c1</infohash>
<guid>https://academictorrents.com/details/7beca571bcadd509492c7fbdef04a792dad257c1</guid>
<link>https://academictorrents.com/details/7beca571bcadd509492c7fbdef04a792dad257c1</link>
<description>This dataset includes company financial filings, announcements, stock data, shareholding patterns, etc. It will receive updated versions from time to time, so please track the latest version. All data is from the public domain. THIS IS FOR EDUCATIONAL PURPOSES ONLY. PLEASE DO NOT MAKE ANY FINANCIAL DECISIONS BASED ON THIS DATA. THE DATA IS COLLECTED AND PROCESSED BY AUTOMATED SCRIPTS, SO IT MAY CONTAIN UNDETECTED ERRORS.</description>
<size>7658025415</size>
</item><item>
<title>Reddit comments/submissions 2024-07</title>
<category>Dataset</category>
<infohash>6e5300446bd9b328d0b812cdb3022891e086d9ec</infohash>
<guid>https://academictorrents.com/details/6e5300446bd9b328d0b812cdb3022891e086d9ec</guid>
<link>https://academictorrents.com/details/6e5300446bd9b328d0b812cdb3022891e086d9ec</link>
<description>Reddit comments and submissions from 2024-07</description>
<size>50641523167</size>
</item><item>
<title>TTLs matter: Efficient cache sizing with TTL-aware miss ratio curves and working set sizes</title>
<category>Dataset</category>
<infohash>16072e74ec3fd902be5167e0538cf37df48ededd</infohash>
<guid>https://academictorrents.com/details/16072e74ec3fd902be5167e0538cf37df48ededd</guid>
<link>https://academictorrents.com/details/16072e74ec3fd902be5167e0538cf37df48ededd</link>
<description>In-memory cache access traces used in the paper "TTLs matter: Efficient cache sizing with TTL-aware miss ratio curves and working set sizes" (EuroSys '24). Paper link: https://dl.acm.org/doi/abs/10.1145/3627703.3650066 GitHub: https://github.com/SariSultan/TTLsMatter-EuroSys24</description>
<size>737395649483</size>
</item><item>
<title>Learning a Part-Level Motion Prior for Articulated Objects</title>
<category>Dataset</category>
<infohash>2e955e41f40147603641573b7e839efae9af9a7f</infohash>
<guid>https://academictorrents.com/details/2e955e41f40147603641573b7e839efae9af9a7f</guid>
<link>https://academictorrents.com/details/2e955e41f40147603641573b7e839efae9af9a7f</link>
<description/>
<size>2071703797760</size>
</item><item>
<title>Reddit comments/submissions 2005-06 to 2024-06</title>
<category>Dataset</category>
<infohash>20520c420c6c846f555523babc8c059e9daa8fc5</infohash>
<guid>https://academictorrents.com/details/20520c420c6c846f555523babc8c059e9daa8fc5</guid>
<link>https://academictorrents.com/details/20520c420c6c846f555523babc8c059e9daa8fc5</link>
<description>Reddit comments and submissions from 2005-06 to 2024-06 collected by pushshift and u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps The more recent dumps were collected by u/RaiderBDev</description>
<size>2813057045013</size>
</item><item>
<title>Reddit comments/submissions 2024-06</title>
<category>Dataset</category>
<infohash>dcdecc93ca9a9d758c045345112771cef5b4989a</infohash>
<guid>https://academictorrents.com/details/dcdecc93ca9a9d758c045345112771cef5b4989a</guid>
<link>https://academictorrents.com/details/dcdecc93ca9a9d758c045345112771cef5b4989a</link>
<description>Reddit comments and submissions from 2024-06 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>47526093790</size>
</item><item>
<title>Reddit comments/submissions 2024-05</title>
<category>Dataset</category>
<infohash>c551adb561b0feae1bdcb844287324908eaf403f</infohash>
<guid>https://academictorrents.com/details/c551adb561b0feae1bdcb844287324908eaf403f</guid>
<link>https://academictorrents.com/details/c551adb561b0feae1bdcb844287324908eaf403f</link>
<description>Reddit comments and submissions from 2024-05 collected by u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps Previous months can be found here https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</description>
<size>48946321057</size>
</item><item>
<title>Reddit comments/submissions 2024-05</title>
<category>Dataset</category>
<infohash>4f60634d96d35158842cd58b495dc3b444d78b0d</infohash>
<guid>https://academictorrents.com/details/4f60634d96d35158842cd58b495dc3b444d78b0d</guid>
<link>https://academictorrents.com/details/4f60634d96d35158842cd58b495dc3b444d78b0d</link>
<description>Reddit comments and submissions from 2024-05 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>48112973520</size>
</item><item>
<title>db.sqlite3</title>
<category>Dataset</category>
<infohash>674d3fbbca65c46c0ba52a65658aef0c8fc99e86</infohash>
<guid>https://academictorrents.com/details/674d3fbbca65c46c0ba52a65658aef0c8fc99e86</guid>
<link>https://academictorrents.com/details/674d3fbbca65c46c0ba52a65658aef0c8fc99e86</link>
<description>This database is an index of the location of DOIs within the Crossref Annual Data File 2024. The data file can be found here: https://academictorrents.com/details/4426fa56a4f3d376ece9ac37ed088095a30de568 A tool for using this database is available here: https://gitlab.com/crossref/labs/labs-data-file-api Essentially, this tool will help you to locate a DOI within the annual data file, a process that can take up to 6 hours without an index.</description>
<size>14632935424</size>
</item><item>
<title>Reddit comments/submissions 2024-04</title>
<category>Dataset</category>
<infohash>9b29491dccf7d9d72e5538ce8b647cf8ed43fb34</infohash>
<guid>https://academictorrents.com/details/9b29491dccf7d9d72e5538ce8b647cf8ed43fb34</guid>
<link>https://academictorrents.com/details/9b29491dccf7d9d72e5538ce8b647cf8ed43fb34</link>
<description>Reddit comments and submissions from 2024-04 collected by u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps Questions can be submitted here https://github.com/ArthurHeitmann/arctic_shift Previous months can be found here https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</description>
<size>48950945093</size>
</item><item>
<title>[Sample Dataset] April 2024 Public Data File from Crossref</title>
<category>Dataset</category>
<infohash>d47fbe29e5ef93a6695421f79a6efa4b801acff1</infohash>
<guid>https://academictorrents.com/details/d47fbe29e5ef93a6695421f79a6efa4b801acff1</guid>
<link>https://academictorrents.com/details/d47fbe29e5ef93a6695421f79a6efa4b801acff1</link>
<description>[Sample Dataset] April 2024 Public Data File from Crossref. This dataset includes 100 random JSON records from the Crossref metadata corpus.</description>
<size>19721846</size>
</item><item>
<title>April 2024 Public Data File from Crossref</title>
<category>Dataset</category>
<infohash>4426fa56a4f3d376ece9ac37ed088095a30de568</infohash>
<guid>https://academictorrents.com/details/4426fa56a4f3d376ece9ac37ed088095a30de568</guid>
<link>https://academictorrents.com/details/4426fa56a4f3d376ece9ac37ed088095a30de568</link>
<description>Note that this Crossref metadata is always openly available. The difference here is that we’ve done the time-saving work of putting all of the records registered through April 2024 into one file for download. To keep this metadata current, you can access new records via our public API at: https://api.crossref.org And, if you do use our API, we encourage you to read the section of the documentation on "etiquette". That is, how to use the API without making it impossible for others to use.</description>
<size>212131805477</size>
</item><item>
<title>Reddit comments/submissions 2024-04</title>
<category>Dataset</category>
<infohash>ad4617a3e9c1f52405197fc088b28a8018e12a7a</infohash>
<guid>https://academictorrents.com/details/ad4617a3e9c1f52405197fc088b28a8018e12a7a</guid>
<link>https://academictorrents.com/details/ad4617a3e9c1f52405197fc088b28a8018e12a7a</link>
<description>Reddit comments and submissions from 2024-04 Documentation, json schemas and more can be found at https://github.com/ArthurHeitmann/arctic_shift Helper scripts for processing files can be found at https://github.com/Watchful1/PushshiftDumps</description>
<size>58669411720</size>
</item><item>
<title>Reddit comments/submissions 2024-03</title>
<category>Dataset</category>
<infohash>deef710de36929e0aa77200fddda73c86142372c</infohash>
<guid>https://academictorrents.com/details/deef710de36929e0aa77200fddda73c86142372c</guid>
<link>https://academictorrents.com/details/deef710de36929e0aa77200fddda73c86142372c</link>
<description>Reddit comments and submissions from 2024-03 collected by u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps Questions can be submitted here https://github.com/ArthurHeitmann/arctic_shift Previous months can be found here https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</description>
<size>50175875574</size>
</item><item>
<title>Mirror of RaiderBDev's reddit dumps, 24-03</title>
<category>Dataset</category>
<infohash>ca989aa94cbd0ac5258553500d9b0f3584f6e4f7</infohash>
<guid>https://academictorrents.com/details/ca989aa94cbd0ac5258553500d9b0f3584f6e4f7</guid>
<link>https://academictorrents.com/details/ca989aa94cbd0ac5258553500d9b0f3584f6e4f7</link>
<description>https://github.com/ArthurHeitmann/arctic_shift Previous dumps of the blocks files can be found here https://academictorrents.com/details/7810d20b3651c0060cb670032ec33818230f654d</description>
<size>80892588064</size>
</item><item>
<title>NSCLC-Radiomics</title>
<category>Dataset</category>
<infohash>dab2bec8e2e96350813ecb590b563778df82122d</infohash>
<guid>https://academictorrents.com/details/dab2bec8e2e96350813ecb590b563778df82122d</guid>
<link>https://academictorrents.com/details/dab2bec8e2e96350813ecb590b563778df82122d</link>
<description>This collection contains images from 422 non-small cell lung cancer (NSCLC) patients. For these patients, pretreatment CT scans, manual delineation by a radiation oncologist of the 3D volume of the gross tumor volume, and clinical outcome data are available. This dataset corresponds to the Lung1 dataset of the study published in Nature Communications. In short, this publication applies a radiomic approach to computed tomography data of 1,019 patients with lung or head-and-neck cancer. Radiomics refers to the comprehensive quantification of tumour phenotypes by applying a large number of quantitative image features. In the present analysis, 440 features quantifying tumour image intensity, shape and texture were extracted. We found that a large number of radiomic features have prognostic power in independent data sets, many of which were not identified as significant before. Radiogenomics analysis revealed that a prognostic radiomic signature, capturing intra-tumour heterogeneity, was associated with underlying gene-expression patterns. These data suggest that radiomics identifies a general prognostic phenotype existing in both lung and head-and-neck cancer. This may have a clinical impact, as imaging is routinely used in clinical practice, providing an unprecedented opportunity to improve decision support in cancer treatment at low cost. The DICOM Radiotherapy Structure Sets (RTSTRUCT) and DICOM Segmentation (SEG) files in this dataset contain a manual delineation by a radiation oncologist of the 3D volume of the primary gross tumor volume ("GTV-1") and selected anatomical structures (i.e., lung, heart and esophagus). Of note, DICOM SEG objects contain a subset of the annotations available in RTSTRUCT. The dataset described here (Lung1) was used to build a prognostic radiomic signature.
The Lung3 dataset, used to investigate the association of radiomic imaging features with gene-expression profiles and consisting of 89 NSCLC CT scans with outcome data, can be found here: NSCLC-Radiomics-Genomics.</description>
<size>11484823273</size>
</item><item>
<title>Udemy - Spring Boot Microservices with Spring Cloud Beginner to Guru</title>
<category>Paper</category>
<infohash>4f77e80bc929a56fefde1dda5b79f7eeef4b982a</infohash>
<guid>https://academictorrents.com/details/4f77e80bc929a56fefde1dda5b79f7eeef4b982a</guid>
<link>https://academictorrents.com/details/4f77e80bc929a56fefde1dda5b79f7eeef4b982a</link>
<description/>
<size>15506359538</size>
</item><item>
<title>AmigosCode - Java Essentials</title>
<category>Course</category>
<infohash>7a59acd9ee3860278b6e0b551f80cfc6ef298456</infohash>
<guid>https://academictorrents.com/details/7a59acd9ee3860278b6e0b551f80cfc6ef298456</guid>
<link>https://academictorrents.com/details/7a59acd9ee3860278b6e0b551f80cfc6ef298456</link>
<description/>
<size>2307596751</size>
</item><item>
<title>AmigosCode - Java Streams API</title>
<category>Course</category>
<infohash>0be1fe5bc3535dfe81f8b0c8a345f76d0a05a184</infohash>
<guid>https://academictorrents.com/details/0be1fe5bc3535dfe81f8b0c8a345f76d0a05a184</guid>
<link>https://academictorrents.com/details/0be1fe5bc3535dfe81f8b0c8a345f76d0a05a184</link>
<description/>
<size>2549896829</size>
</item><item>
<title>AmigosCode - Mastering Kubernetes</title>
<category>Course</category>
<infohash>4cf72a8e5d53e4f8617088a605607c831352325a</infohash>
<guid>https://academictorrents.com/details/4cf72a8e5d53e4f8617088a605607c831352325a</guid>
<link>https://academictorrents.com/details/4cf72a8e5d53e4f8617088a605607c831352325a</link>
<description/>
<size>1907080863</size>
</item><item>
<title>DCASE 2024 Task 5: Few-shot Bioacoustic Event Detection Development Set v1</title>
<category>Dataset</category>
<infohash>4cc83d8acf726e2cc5eadfedfc4e1f00bdb430be</infohash>
<guid>https://academictorrents.com/details/4cc83d8acf726e2cc5eadfedfc4e1f00bdb430be</guid>
<link>https://academictorrents.com/details/4cc83d8acf726e2cc5eadfedfc4e1f00bdb430be</link>
<description>See also Zenodo https://zenodo.org/records/10829604</description>
<size>21890439038</size>
</item><item>
<title>voxceleb</title>
<category>Dataset</category>
<infohash>bdd9f57a6f47aa197f502b68bc0195f5ac786ec4</infohash>
<guid>https://academictorrents.com/details/bdd9f57a6f47aa197f502b68bc0195f5ac786ec4</guid>
<link>https://academictorrents.com/details/bdd9f57a6f47aa197f502b68bc0195f5ac786ec4</link>
<description>This torrent shares the VoxCeleb1 and VoxCeleb2 datasets. The original dataset creators no longer provide access to the dataset. To ensure papers in the field of speaker recognition can be reproduced (many have used VoxCeleb in recent years), the data should be available for academic purposes. The audio data is stored as mono-channel, 16000 Hz, signed 16-bit (little-endian) PCM WAV files. This torrent does not include video data.</description>
<size>274288526425</size>
</item><item>
<title>geoBoundaries Version 4</title>
<category>Dataset</category>
<infohash>6a6282f84a237930022960f0ec17e4ac2934ee8a</infohash>
<guid>https://academictorrents.com/details/6a6282f84a237930022960f0ec17e4ac2934ee8a</guid>
<link>https://academictorrents.com/details/6a6282f84a237930022960f0ec17e4ac2934ee8a</link>
<description>geoBoundaries Version 4, created by Daniel Miller Runfola. This is an archive of the 4.0.0 release of geoBoundaries (www.geoboundaries.org), which was released on August 31, 2021. It is provided for research replication and historical analysis.</description>
<size>47720475778</size>
</item><item>
<title>geoBoundaries Version 3</title>
<category>Dataset</category>
<infohash>951d2e204a14b99ee14ad16acd3a74f083086533</infohash>
<guid>https://academictorrents.com/details/951d2e204a14b99ee14ad16acd3a74f083086533</guid>
<link>https://academictorrents.com/details/951d2e204a14b99ee14ad16acd3a74f083086533</link>
<description>geoBoundaries Version 3, created by Daniel Miller Runfola. This is an archive of the 3.0.0 release of geoBoundaries (www.geoboundaries.org), which was released on June 5, 2020. It is provided for research replication and historical analysis.</description>
<size>53097825524</size>
</item><item>
<title>geoBoundaries Version 2</title>
<category>Dataset</category>
<infohash>7547ad44ed9b4252379da4dff197b5d9020dd4d6</infohash>
<guid>https://academictorrents.com/details/7547ad44ed9b4252379da4dff197b5d9020dd4d6</guid>
<link>https://academictorrents.com/details/7547ad44ed9b4252379da4dff197b5d9020dd4d6</link>
<description>geoBoundaries Version 2, created by Daniel Miller Runfola. Archival data for geoBoundaries Version 2.0.1; released 3/4/2020. No new boundary information in this build; technical fixes only. Minor fixes to zipfile builds (wmgeolab/geoBoundaries#1).</description>
<size>57391640706</size>
</item><item>
<title>geoBoundaries Version 5</title>
<category>Dataset</category>
<infohash>2fc020dac1cdcd28a85df96fd99f1618b9bb26ba</infohash>
<guid>https://academictorrents.com/details/2fc020dac1cdcd28a85df96fd99f1618b9bb26ba</guid>
<link>https://academictorrents.com/details/2fc020dac1cdcd28a85df96fd99f1618b9bb26ba</link>
<description>geoBoundaries Version 5, created by Daniel Miller Runfola. Built by the community and the William &amp; Mary geoLab, the geoBoundaries Global Database of Political Administrative Boundaries is an online, open-license (CC BY 4.0 / ODbL) resource of information on administrative boundaries (i.e., state, county) for every country in the world. Since 2016, we have tracked approximately 1 million boundaries within over 200 entities, including all UN member states. All boundaries are available to view or download in common file formats; the only requirement for use is acknowledgement.</description>
<size>60143947361</size>
</item><item>
<title>geoBoundaries Version 1</title>
<category>Dataset</category>
<infohash>3c3802a3d61b83c674e9c61879ae808920587de4</infohash>
<guid>https://academictorrents.com/details/3c3802a3d61b83c674e9c61879ae808920587de4</guid>
<link>https://academictorrents.com/details/3c3802a3d61b83c674e9c61879ae808920587de4</link>
<description>geoBoundaries Version 1, created by Daniel Miller Runfola. Version 1.3.3, released 8/30/2018: simplified geoJSONs in release. Version 1.3.0 ("Mews"), released 4/23/2018: public geoJSON and shapefile release; ADM0 complete coverage, ADM1 close to complete. Version 1.0.0 ("Tarbean"), released 11/16/2017: first version of geoBoundaries.</description>
<size>12561560679</size>
</item><item>
<title>Behavioral Genetics - Matt McGue - ISIR 2022 Distinguished Contributor Interview (MP4 Video)</title>
<category>Course</category>
<infohash>d69ae0222937e1c3f2805925bbc8c745d77c3901</infohash>
<guid>https://academictorrents.com/details/d69ae0222937e1c3f2805925bbc8c745d77c3901</guid>
<link>https://academictorrents.com/details/d69ae0222937e1c3f2805925bbc8c745d77c3901</link>
<description>At our 2022 conference at the University of Vienna, we were treated to a fantastic interview by David Lubinski of Matt McGue, this year’s pick for the Distinguished Contributor Interview! Professor McGue is beloved within the University of Minnesota’s psychology department, where he is Regents Professor and co-director of the Minnesota Center for Twin and Family Research, as well as in the broader community of behavioral genetics and intelligence research. An author of over 375 papers with nearly 75,000 citations as of 2022, Matt has won the Dobzhansky Award for Lifetime Contributions to Behavioral Genetic Research (2011), the Shields Award for Lifetime Contributions to Twin Research (2007), and many other awards and honors. Tune in for a fascinating discussion about Matt’s early history, his past and current research, and the future of our field.</description>
<size>170446821</size>
</item><item>
<title>Behavioral Genetics - Matt McGue - ISIR 2022 Keynote Address (MP4 Video)</title>
<category>Course</category>
<infohash>b7473b9d5eafe1b866d9e150a9be1ce26d9dbdda</infohash>
<guid>https://academictorrents.com/details/b7473b9d5eafe1b866d9e150a9be1ce26d9dbdda</guid>
<link>https://academictorrents.com/details/b7473b9d5eafe1b866d9e150a9be1ce26d9dbdda</link>
<description>Vienna 2022 Keynote: Matt McGue After a long wait, we’ve finally uploaded Matt McGue’s wonderful keynote address at the IMC pre-conference welcome event at the 2022 ISIR in Vienna, Austria, entitled "Without merit: Is an imperfect system worth saving?" Those of you who saw it live will surely recall it fondly; for those who haven’t seen it, enjoy this treat while we anticipate this year’s Zürich conference.</description>
<size>494942074</size>
</item><item>
<title>Behavioral Genetics - Matt McGue ISIR 2022 Distinguished Contributor Interview</title>
<category>Course</category>
<infohash>a34d0e3ae41bc29b8c584f1fdd80ea00b1765346</infohash>
<guid>https://academictorrents.com/details/a34d0e3ae41bc29b8c584f1fdd80ea00b1765346</guid>
<link>https://academictorrents.com/details/a34d0e3ae41bc29b8c584f1fdd80ea00b1765346</link>
<description>At our 2022 conference at the University of Vienna, we were treated to a fantastic interview by David Lubinski of Matt McGue, this year’s pick for the Distinguished Contributor Interview! Professor McGue is beloved within the University of Minnesota’s psychology department, where he is Regents Professor and co-director of the Minnesota Center for Twin and Family Research, as well as in the broader community of behavioral genetics and intelligence research. An author of over 375 papers with nearly 75,000 citations as of 2022, Matt has won the Dobzhansky Award for Lifetime Contributions to Behavioral Genetic Research (2011), the Shields Award for Lifetime Contributions to Twin Research (2007), and many other awards and honors. Tune in for a fascinating discussion about Matt’s early history, his past and current research, and the future of our field.</description>
<size>97103701</size>
</item><item>
<title>Behavioral Genetics - Matt McGue ISIR 2022 Keynote Address</title>
<category>Course</category>
<infohash>5d6df954f57681792e02f8bf383a2535de9f0b2a</infohash>
<guid>https://academictorrents.com/details/5d6df954f57681792e02f8bf383a2535de9f0b2a</guid>
<link>https://academictorrents.com/details/5d6df954f57681792e02f8bf383a2535de9f0b2a</link>
<description>Without merit: Is an imperfect system worth saving?</description>
<size>101914800</size>
</item><item>
<title>waterbird_complete95_forest2water2.tar.gz</title>
<category>Dataset</category>
<infohash>63d430cb586e95e57397236ef99aee02cd500d1a</infohash>
<guid>https://academictorrents.com/details/63d430cb586e95e57397236ef99aee02cd500d1a</guid>
<link>https://academictorrents.com/details/63d430cb586e95e57397236ef99aee02cd500d1a</link>
<description>The Waterbirds dataset is constructed by cropping out birds from photos in the Caltech-UCSD Birds-200-2011 (CUB) dataset (Wah et al., 2011) and transferring them onto backgrounds from the Places dataset (Zhou et al., 2017).</description>
<size>489698023</size>
</item><item>
<title>grok-1</title>
<category>Dataset</category>
<infohash>5f96d43576e3d386c9ba65b883210a393b68210e</infohash>
<guid>https://academictorrents.com/details/5f96d43576e3d386c9ba65b883210a393b68210e</guid>
<link>https://academictorrents.com/details/5f96d43576e3d386c9ba65b883210a393b68210e</link>
<description>Grok-1 is a 314B parameter Mixture of Experts model - Base model (not finetuned) - 8 experts (2 active) - 86B active parameters - Apache 2.0 license - Code: https://github.com/xai-org/grok-1 - Happy coding! P.S. we're hiring: https://x.ai/career</description>
<size>318239939300</size>
</item><item>
<title>Mirror of RaiderBDev's reddit dumps, 24-02</title>
<category>Dataset</category>
<infohash>1dc131c38d09d8f3912a0040a9a7434ffccc1c78</infohash>
<guid>https://academictorrents.com/details/1dc131c38d09d8f3912a0040a9a7434ffccc1c78</guid>
<link>https://academictorrents.com/details/1dc131c38d09d8f3912a0040a9a7434ffccc1c78</link>
<description>https://github.com/ArthurHeitmann/arctic_shift Previous dumps of the blocks files can be found here https://academictorrents.com/details/7810d20b3651c0060cb670032ec33818230f654d</description>
<size>76517867464</size>
</item><item>
<title>Reddit comments/submissions 2024-02</title>
<category>Dataset</category>
<infohash>5969ae3e21bb481fea63bf649ec933c222c1f824</infohash>
<guid>https://academictorrents.com/details/5969ae3e21bb481fea63bf649ec933c222c1f824</guid>
<link>https://academictorrents.com/details/5969ae3e21bb481fea63bf649ec933c222c1f824</link>
<description>Reddit comments and submissions from 2024-02, collected by u/RaiderBDev. These are zstandard-compressed ndjson files. Example Python scripts for parsing the data can be found here: https://github.com/Watchful1/PushshiftDumps Questions can be submitted here: https://github.com/ArthurHeitmann/arctic_shift Previous months can be found here: https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</description>
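The dump format above (zstandard-compressed ndjson, one JSON object per line) can be read as a stream without extracting the whole file. A minimal sketch: the line-parsing part is stdlib-only, while the decompression step assumes the third-party `zstandard` package (file name and field names are illustrative, not from the dumps):

```python
import io
import json

def iter_ndjson(stream):
    """Yield one parsed object per non-empty line of an ndjson text stream."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# For the actual .zst dumps you would first wrap the file in a streaming
# decompressor, e.g. with the third-party zstandard package (an assumption;
# any streaming zstd reader works, and the large window size is needed for
# these dumps):
#
#   import zstandard
#   with open("RC_2024-02.zst", "rb") as fh:
#       reader = zstandard.ZstdDecompressor(max_window_size=2**31).stream_reader(fh)
#       for obj in iter_ndjson(io.TextIOWrapper(reader, encoding="utf-8")):
#           ...
#
# Illustrative stand-in for a decompressed dump:
sample = io.StringIO('{"subreddit": "AskReddit", "score": 42}\n\n'
                     '{"subreddit": "science", "score": 7}\n')
rows = list(iter_ndjson(sample))
print(len(rows), rows[0]["subreddit"])  # prints: 2 AskReddit
```

Streaming this way keeps memory flat even for the multi-gigabyte monthly files.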
<size>47323892024</size>
</item><item>
<title>Icentia11k-wfdb</title>
<category>Dataset</category>
<infohash>d911a1a4454186a862226605dd6e3f6f06b92b53</infohash>
<guid>https://academictorrents.com/details/d911a1a4454186a862226605dd6e3f6f06b92b53</guid>
<link>https://academictorrents.com/details/d911a1a4454186a862226605dd6e3f6f06b92b53</link>
<description>This is the wfdb version of the Icentia11k dataset: https://github.com/shawntan/icentia-ecg/blob/master/physionet/wfdb_data_demo.ipynb We release the largest public ECG dataset of raw signals for representation learning containing over 11k patients and 2 billion labelled beats. Our goal is to enable semi-supervised ECG models to be made as well as to discover unknown subtypes of arrhythmia and anomalous ECG signal events. To this end, we propose an unsupervised representation learning task, evaluated in a semi-supervised fashion. We provide a set of baselines for different feature extractors that can be built upon. Additionally, we perform qualitative evaluations on results from PCA embeddings, where we identify some clustering of known subtypes indicating the potential for representation learning in arrhythmia sub-type discovery. https://i.imgur.com/5PxNneL.png License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) http://creativecommons.org/licenses/by-nc-sa/4.0/</description>
<size>202154613899</size>
</item><item>
<title>Blind Video Deflickering by Neural Filtering with a Flawed Atlas</title>
<category>Dataset</category>
<infohash>78b25af19283c073fffb47ed0968e0e10db30ed8</infohash>
<guid>https://academictorrents.com/details/78b25af19283c073fffb47ed0968e0e10db30ed8</guid>
<link>https://academictorrents.com/details/78b25af19283c073fffb47ed0968e0e10db30ed8</link>
<description/>
<size>79958693784</size>
</item><item>
<title>Reddit subreddits metadata 2024-01</title>
<category>Dataset</category>
<infohash>c902f4b65f0e82a5e37db205c3405f02a028ecdf</infohash>
<guid>https://academictorrents.com/details/c902f4b65f0e82a5e37db205c3405f02a028ecdf</guid>
<link>https://academictorrents.com/details/c902f4b65f0e82a5e37db205c3405f02a028ecdf</link>
<description>Information and statistics for 18 million subreddits, retrieved in January 2024. Of those, 2 million were no longer available (private, banned, quarantined, etc.); these are stored separately in subreddits_meta_only_2024-01.zst and contain only the name, id, and (where available) subscriber counts and statistics. Statistics contain aggregate information from the pushshift and arctic shift datasets: date of earliest post &amp; comment, number of posts &amp; comments, and when that data was last updated. General documentation and other reddit-related data can be found at https://github.com/ArthurHeitmann/arctic_shift JSON schemas specifically are at https://github.com/ArthurHeitmann/arctic_shift/tree/master/schemas</description>
<size>1848182612</size>
</item><item>
<title>w2v2-ssl-checkpoints</title>
<category>Dataset</category>
<infohash>4dcb2fbd6cba0b3e450ae851abd4cad6c7289087</infohash>
<guid>https://academictorrents.com/details/4dcb2fbd6cba0b3e450ae851abd4cad6c7289087</guid>
<link>https://academictorrents.com/details/4dcb2fbd6cba0b3e450ae851abd4cad6c7289087</link>
<description>This is a companion dataset to the paper "The Effect of Batch Size on Contrastive Self-Supervised Speech Representation Learning". We provide progression checkpoints taken every 5k steps during pre-training with various batch sizes.</description>
<size>226246036688</size>
</item><item>
<title>Mirror of RaiderBDev's reddit dumps, 24-01</title>
<category>Dataset</category>
<infohash>c440a293602270f03a47e3110a174365b965a093</infohash>
<guid>https://academictorrents.com/details/c440a293602270f03a47e3110a174365b965a093</guid>
<link>https://academictorrents.com/details/c440a293602270f03a47e3110a174365b965a093</link>
<description>https://github.com/ArthurHeitmann/arctic_shift Previous dumps of the blocks files can be found here https://academictorrents.com/details/7810d20b3651c0060cb670032ec33818230f654d</description>
<size>82055130873</size>
</item><item>
<title>Reddit comments/submissions 2024-01</title>
<category>Dataset</category>
<infohash>ac88546145ca3227e2b90e51ab477c4527dd8b90</infohash>
<guid>https://academictorrents.com/details/ac88546145ca3227e2b90e51ab477c4527dd8b90</guid>
<link>https://academictorrents.com/details/ac88546145ca3227e2b90e51ab477c4527dd8b90</link>
<description>Reddit comments and submissions from 2024-01, collected by u/RaiderBDev. These are zstandard-compressed ndjson files. Example Python scripts for parsing the data can be found here: https://github.com/Watchful1/PushshiftDumps Questions can be submitted here: https://github.com/ArthurHeitmann/arctic_shift Previous months can be found here: https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</description>
<size>50548237069</size>
</item><item>
<title>Most Popular Email Domains in Collections 1-5 &amp; ANTIPUBLIC &amp; MYR &amp; Zabugor</title>
<category>Dataset</category>
<infohash>4cd7c31e445be47bb358481572ae7415dc6f4b6e</infohash>
<guid>https://academictorrents.com/details/4cd7c31e445be47bb358481572ae7415dc6f4b6e</guid>
<link>https://academictorrents.com/details/4cd7c31e445be47bb358481572ae7415dc6f4b6e</link>
<description>These are the most popular domains mentioned in the Collections 1-5, ANTIPUBLIC, MYR, and Zabugor breach compilations, sorted from most to least referenced. No other data is included. Researchers can use these domains to look at the prevalence of typos, investigate spam campaign domains, *roughly* identify what a popular mailserver might be, etc. Please note, this dataset is heavily biased because of the various sources used. It is not a definitive measure of “this email provider is the most popular” or *anything like that* - it only tells you that these domains were the most popular as found in this *particular* compilation.</description>
<size>139419630</size>
</item><item>
<title>Subreddit comments/submissions 2005-06 to 2023-12</title>
<category>Dataset</category>
<infohash>56aa49f9653ba545f48df2e33679f014d2829c10</infohash>
<guid>https://academictorrents.com/details/56aa49f9653ba545f48df2e33679f014d2829c10</guid>
<link>https://academictorrents.com/details/56aa49f9653ba545f48df2e33679f014d2829c10</link>
<description>This is the top 40,000 subreddits from reddit's history, in separate files. You can use your torrent client to download only the subreddits you're interested in. These are from the pushshift dumps from 2005-06 to 2023-12, which can be found here: https://academictorrents.com/details/7c0645c94321311bb05bd879ddee4d0eba08aaee These are zstandard-compressed ndjson files. Example Python scripts for parsing the data can be found here: https://github.com/Watchful1/PushshiftDumps If you have questions, DM u/Watchful on reddit or respond to this post: https://www.reddit.com/r/pushshift/comments/1akrhg3/separate_dump_files_for_the_top_40k_subreddits/</description>
<size>2643636050426</size>
</item><item>
<title>Appin Uncensored</title>
<category>Dataset</category>
<infohash>4919c48057bd2bb3e02261de44483a30d18542c5</infohash>
<guid>https://academictorrents.com/details/4919c48057bd2bb3e02261de44483a30d18542c5</guid>
<link>https://academictorrents.com/details/4919c48057bd2bb3e02261de44483a30d18542c5</link>
<description>From [DDoSecrets post "Appin Uncensored"](https://ddosecrets.com/wiki/Appin_Uncensored) &gt; On November 16, 2023, [Reuters](https://www.reuters.com/) published a special report titled [How an Indian startup hacked the world](https://www.reuters.com/investigates/special-report/usa-hackers-appin/), documenting how the Indian firm Appin "grew from an educational startup to a hack-for-hire powerhouse that stole secrets from executives, politicians, military officials and wealthy elites around the globe." &gt; &gt; In response to the exposé, Appin filed a lawsuit in India against Reuters. Following a [preliminary court order](https://www.techdirt.com/2023/12/07/indian-court-orders-reuters-to-take-down-investigative-report-regarding-a-hack-for-hire-company/) issued by a district court in New Delhi on December 4, 2023, Reuters temporarily removed the article, noting that it "stands by its reporting and plans to appeal the decision." Following this, several other outlets such as [Lawfare](https://www.lawfaremedia.org/) also [removed some of their secondary reporting](https://www.lawfaremedia.org/article/the-hack-for-hire-industry-death-by-a-thousand-cuts-when-theft-doesn t-work-troll) and the Reuters article has been [removed from the Internet Archive](http://web.archive.org/web/20240000000000/https://www.reuters.com/investigates/special-report/usa-hackers-appin/), effectively censoring the information. &gt; &gt; In response to the unacceptable censorship by Appin and the Indian courts, Distributed Denial of Secrets is launching a new initiative to combat censorship: the Greenhouse Project. The Greenhouse Project continues DDoSecrets' mission of ensuring the free transmission of data in the public interest by making the "publisher of last resort" concept [proposed by George Buchanan in 2007](https://dl.acm.org/doi/10.1145/1255175.1255274) a reality. By ensuring the reporting and source files are preserved, the Greenhouse Project [builds on previous efforts](https://theintercept.com/2023/04/11/los-angeles-lawsuit-lapd-headshots/), creating a "warming effect" to reverse the chilling effects of censorship. This is a mirror of DDoSecrets' archival work - you can download the source files behind the reporting via censorship-resistant torrent here. This data is copied here in the public interest, both for improved censorship resistance and to let security researchers take a deeper look into alleged modern hack-for-hire services.</description>
<size>283024576</size>
</item><item>
<title>nasa-iotd-dataset-20240124</title>
<category>Dataset</category>
<infohash>c126144397c3cbaf014c66d3d244a06f1e05a3ad</infohash>
<guid>https://academictorrents.com/details/c126144397c3cbaf014c66d3d244a06f1e05a3ad</guid>
<link>https://academictorrents.com/details/c126144397c3cbaf014c66d3d244a06f1e05a3ad</link>
<description>All (accessible) NASA Image of the Day files up to 01/24/2024. Compiled by tytech038.</description>
<size>1737058989</size>
</item><item>
<title>Mirror of RaiderBDev's reddit dumps, 23-12</title>
<category>Dataset</category>
<infohash>0d0364f8433eb90b6e3276b7e150a37da8e4a12b</infohash>
<guid>https://academictorrents.com/details/0d0364f8433eb90b6e3276b7e150a37da8e4a12b</guid>
<link>https://academictorrents.com/details/0d0364f8433eb90b6e3276b7e150a37da8e4a12b</link>
<description>https://github.com/ArthurHeitmann/arctic_shift Previous dumps of the blocks files can be found here https://academictorrents.com/details/7810d20b3651c0060cb670032ec33818230f654d</description>
<size>73519373268</size>
</item><item>
<title>Reddit comments/submissions 2005-06 to 2023-12</title>
<category>Dataset</category>
<infohash>9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</infohash>
<guid>https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</guid>
<link>https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4</link>
<description>Reddit comments and submissions from 2005-06 to 2023-09 collected by pushshift and u/RaiderBDev. These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps The more recent dumps are collected by u/RaiderBDev and questions can be submitted here https://github.com/ArthurHeitmann/arctic_shift</description>
<size>2518779952359</size>
</item><item>
<title>NODE21</title>
<category>Dataset</category>
<infohash>c9ba503e3427c2867c46fe91174e2af88452d969</infohash>
<guid>https://academictorrents.com/details/c9ba503e3427c2867c46fe91174e2af88452d969</guid>
<link>https://academictorrents.com/details/c9ba503e3427c2867c46fe91174e2af88452d969</link>
<description>This dataset is provided for the NODE21 public challenge. The NODE21 dataset consists of frontal chest radiographs with annotated bounding boxes around nodules: 4882 frontal chest radiographs in total, of which 1134 CXR images (1476 nodules) are annotated with bounding boxes around nodules, while the remaining 3748 images are free of nodules and hence represent the negative class. The images in this set are from public datasets that allow us to remix and redistribute. They come from the following sources: - JSRT [1] - PadChest [2] - ChestX-ray14 [3] - Open-I [4] The annotations were provided by our chest radiologists. We provide both original and preprocessed versions of the dataset. Further, for the generation track, we provide a public set of NODE21 CT patches. These are patches of nodules from CT scans, originating from the LUNA16 dataset [5][6]. For more detailed descriptions of the data, please refer to the challenge website: NODE21 [1] Shiraishi, J., Katsuragawa, S., Ikezoe, J., Matsumoto, T., Kobayashi, T., Komatsu, K., Matsui, M., Fujita, H., Kodera, Y., Doi, K., 2000. Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists’ detection of pulmonary nodules. American Journal of Roentgenology 174, 71–74. doi:10.2214/ajr.174.1.1740071. [2] Bustos, A., Pertusa, A., Salinas, J.M., de la Iglesia-Vaya, M., 2020. PadChest: A large chest x-ray image dataset with multi-label annotated reports. Medical Image Analysis 66, 101797. doi:10.1016/j.media.2020.101797. [3] Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M., 2017. ChestX-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases, in: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2097–2106. doi:10.1109/cvpr.2017.369. [4] Demner-Fushman, D., Antani, S., Simpson, M., Thoma, G.R., 2012. Design and Development of a Multimodal Biomedical Information Retrieval System. Journal of Computing Science and Engineering 6, 168–177. doi:10.5626/JCSE.2012.6.2.168. [5] Fedorov, A., Hancock, M., Clunie, D., Brochhausen, M., Bona, J., Kirby, J., Freymann, J., Pieper, S., Aerts, H., Kikinis, R., Prior, F., 2019. Standardized representation of the LIDC annotations using DICOM. The Cancer Imaging Archive. doi:10.7937/TCIA.2018.H7UMFURQ. [6] Setio et al., Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge, Medical Image Analysis 42, doi:10.1016/j.media.2017.06.015.</description>
<size>37171954329</size>
</item><item>
<title>Dump of Wikidata of January 1st, 2024.</title>
<category>Dataset</category>
<infohash>0852ef544a4694995fcbef7132477c688ded7d9a</infohash>
<guid>https://academictorrents.com/details/0852ef544a4694995fcbef7132477c688ded7d9a</guid>
<link>https://academictorrents.com/details/0852ef544a4694995fcbef7132477c688ded7d9a</link>
<description>Wikidata: a free and open knowledge base. Accessible: readable and editable by humans and machines alike. Central hub: serves as the core storage for structured data across Wikimedia's sister projects, such as Wikipedia, Wikivoyage, Wiktionary, Wikisource, and more. Current edition: this torrent represents an unofficial dump of Wikidata as of January 1st, 2024.</description>
<size>130532238059</size>
</item><item>
<title>RC_2015-01.bz2</title>
<category>Paper</category>
<infohash>32916ad30ce4c90ee4c47a95bd0075e44ac15dd2</infohash>
<guid>https://academictorrents.com/details/32916ad30ce4c90ee4c47a95bd0075e44ac15dd2</guid>
<link>https://academictorrents.com/details/32916ad30ce4c90ee4c47a95bd0075e44ac15dd2</link>
<description/>
<size>5452413560</size>
</item><item>
<title>DENTEX_CHALLENGE</title>
<category>Dataset</category>
<infohash>6b44adbe4c64591e859b57ffe04c091cf6cfd946</infohash>
<guid>https://academictorrents.com/details/6b44adbe4c64591e859b57ffe04c091cf6cfd946</guid>
<link>https://academictorrents.com/details/6b44adbe4c64591e859b57ffe04c091cf6cfd946</link>
<description>https://i.imgur.com/sAXITsB.png The DENTEX dataset comprises panoramic dental X-rays obtained from three different institutions using standard clinical conditions but varying equipment and imaging protocols, resulting in diverse image quality reflecting heterogeneous clinical practice. The dataset includes X-rays from patients aged 12 and above, randomly selected from the hospital's database to ensure patient privacy and confidentiality. To enable effective use of the FDI system, the dataset is hierarchically organized into three types of data: (a) 693 X-rays labeled for quadrant detection and quadrant classes only, (b) 634 X-rays labeled for tooth detection with quadrant and tooth enumeration classes, (c) 1005 X-rays fully labeled for abnormal tooth detection with quadrant, tooth enumeration, and diagnosis classes. The diagnosis class includes four specific categories: caries, deep caries, periapical lesions, and impacted teeth. An additional 1571 unlabeled X-rays are provided for pre-training. ## Data Split for Evaluation and Training The DENTEX 2023 dataset comprises three types of data: (a) partially annotated quadrant data, (b) partially annotated quadrant-enumeration data, and (c) fully annotated quadrant-enumeration-diagnosis data. The first two types of data are intended for training and development purposes, while the third type is used for training and evaluations. In line with standard machine learning practice, the fully annotated third dataset, consisting of 1005 panoramic X-rays, is partitioned into training, validation, and testing subsets, comprising 705, 50, and 250 images, respectively. Ground truth labels are provided only for the training data, while the validation data is provided without associated ground truth, and the testing data is kept hidden from participants. Participants are allowed to use additional public data for augmenting the provided DENTEX dataset or for pre-training models on such datasets to enhance performance. However, they must ensure that all the data they use is publicly available. Additionally, they must document the use of external data clearly in their final short paper submission, providing details on the dataset and its source. ## Annotation Protocol DENTEX provides three hierarchically annotated datasets that facilitate various dental detection tasks: (1) quadrant-only for quadrant detection, (2) quadrant-enumeration for tooth detection, and (3) quadrant-enumeration-diagnosis for abnormal tooth detection. Although it may seem redundant to provide a quadrant detection dataset, it is crucial for utilizing the FDI Numbering System. The FDI system is a globally used system that assigns each quadrant of the mouth a number from 1 through 4. The top right is 1, the top left is 2, the bottom left is 3, and the bottom right is 4. Then each of the eight teeth in a quadrant is numbered 1 through 8. The 1 starts at the front middle tooth, and the numbers rise the farther back we go. So, for example, the back tooth on the lower left side would be 48 according to FDI notation, which means quadrant 4, tooth number 8. Therefore, the quadrant segmentation dataset can significantly simplify the dental enumeration task, even though evaluations will be made only on the fully annotated third dataset. All annotations in the DENTEX dataset are meticulously crafted by a team of dental experts. Specifically, each image is annotated by a final-year dental student, and the annotations are further verified and corrected by one of three expert dentists with over 15 years of experience. Therefore, the annotated data in DENTEX is of the highest quality and accuracy, which makes it a valuable resource for dental research.</description>
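The FDI numbering described above reduces to a two-digit code: the quadrant digit (1-4) followed by the tooth position (1-8). A minimal sketch of that rule (the helper name is my own, not part of the dataset):

```python
# FDI tooth notation as described in the dataset card: a two-digit code made
# of the quadrant number (1-4) followed by the tooth position within the
# quadrant (1-8, counted outward from the front middle tooth).
# Helper name is illustrative, not from the DENTEX tooling.
def fdi_code(quadrant, tooth):
    if quadrant not in range(1, 5) or tooth not in range(1, 9):
        raise ValueError("quadrant must be 1-4 and tooth must be 1-8")
    return quadrant * 10 + tooth

# The example from the description: back tooth on the lower left side,
# i.e. quadrant 4, tooth 8.
print(fdi_code(4, 8))  # prints: 48
```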
<size>10927391529</size>
</item><item>
<title>English Wiktionary parsed enwikt20231001</title>
<category>Dataset</category>
<infohash>d0c67ff5bdba2ca1b1f20ec756d554bc85142537</infohash>
<guid>https://academictorrents.com/details/d0c67ff5bdba2ca1b1f20ec756d554bc85142537</guid>
<link>https://academictorrents.com/details/d0c67ff5bdba2ca1b1f20ec756d554bc85142537</link>
<description>Parsed English Wiktionary SQL dump (MySQL): enwikt20231001_parsed.sql.7z (251 MB). English Wiktionary source database: enwikt20231001.sql.7z (1.6 GB). Also included: 27 log files with errors generated by the parser while parsing the English Wiktionary. English Wiktionary (https://en.wiktionary.org) was parsed by the Wikokit software. The Wiktionary parser source code (Java) and documentation are available on GitHub: https://github.com/componavt/wikokit. Unpack the dump: 7z e enwikt20231001_parsed.sql.7z. Then import the unpacked SQL file into the MySQL database: mysql&gt; CREATE DATABASE enwikt20231001_parsed; mysql&gt; USE enwikt20231001_parsed; mysql&gt; SOURCE /path/enwikt20231001_parsed.sql;</description>
<size>1962409984</size>
</item><item>
<title>VerSe'20 CT Dataset</title>
<category>Dataset</category>
<infohash>0ac07fd4ddf1802208f88c61c5ccf7d029d87a18</infohash>
<guid>https://academictorrents.com/details/0ac07fd4ddf1802208f88c61c5ccf7d029d87a18</guid>
<link>https://academictorrents.com/details/0ac07fd4ddf1802208f88c61c5ccf7d029d87a18</link>
<description>VerSe: A Vertebrae labelling and segmentation benchmark for multi-detector CT images ## What is VerSe? Spine or vertebral segmentation is a crucial step in all applications regarding automated quantification of spinal morphology and pathology. With the advent of deep learning, for such a task on computed tomography (CT) scans, a big and varied data is a primary sought-after resource. However, a large-scale, public dataset is currently unavailable. We believe *VerSe* can help here. VerSe is a large scale, multi-detector, multi-site, CT spine dataset consisting of 374 scans from 355 patients. The challenge was held in two iterations in conjunction with MICCAI 2019 and 2020. The tasks evaluated for include: vertebral labelling and segmentation. ## Citing VerSe If you use VerSe, we would appreciate references to the following papers. 1. **Sekuboyina A et al., VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images, 2021.**&lt;br /&gt;In Medical Image Analysis: https://doi.org/10.1016/j.media.2021.102166&lt;br /&gt;Pre-print: https://arxiv.org/abs/2001.09193 2. **Löffler M et al., A Vertebral Segmentation Dataset with Fracture Grading. Radiology: Artificial Intelligence, 2020.**&lt;br /&gt;In Radiology AI: https://doi.org/10.1148/ryai.2020190138 3. **Liebl H and Schinz D et al., A Computed Tomography Vertebral Segmentation Dataset with Anatomical Variations and Multi-Vendor Scanner Data, 2021.**&lt;br /&gt;Pre-print: https://arxiv.org/pdf/2103.06360.pdf ## Data * The dataset has four files corresponding to one data sample: image, segmentation mask, centroid annotations, a PNG overview of the annotations. * Data structure - 01_training - Train data - 02_validation - (Formerly) PUBLIC test data - 03_test - (Formerly) HIDDEN test data * Sub-directory-based arrangement for each patient. 
File names are constructed of entities, a suffix, and a file extension, following the conventions of the Brain Imaging Data Structure (BIDS; https://bids.neuroimaging.io/). Example: training/rawdata/sub-verse000/sub-verse000_dir-orient_ct.nii.gz - CT image series; training/derivatives/sub-verse000/sub-verse000_dir-orient_seg-vert_msk.nii.gz - segmentation mask of the vertebrae; sub-verse000_dir-orient_seg-subreg_ctd.json - centroid coordinates in image space; sub-verse000_dir-orient_seg-vert_snp.png - preview reformations of the annotated CT data. * Centroid coordinates of the subject-based structure (.json file) are given in voxels in the image space. "label" corresponds to the vertebral label: - 1-7: cervical spine C1-C7 - 8-19: thoracic spine T1-T12 - 20-25: lumbar spine L1-L6 - 26: sacrum (not labeled in this dataset) - 27: coccyx (not labeled in this dataset) - 28: additional 13th thoracic vertebra, T13</description>
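The label ranges above can be applied directly when reading the centroid .json files. A minimal sketch mapping a label integer to a vertebra name (the function name is my own; the ranges follow the list in the description):

```python
# Map a VerSe centroid "label" integer to a vertebra name, following the
# label scheme in the dataset description. Function name is illustrative,
# not part of the VerSe tooling.
def verse_label_name(label):
    if label in range(1, 8):
        return "C%d" % label          # cervical spine C1-C7
    if label in range(8, 20):
        return "T%d" % (label - 7)    # thoracic spine T1-T12
    if label in range(20, 26):
        return "L%d" % (label - 19)   # lumbar spine L1-L6
    if label == 26:
        return "sacrum"               # not labeled in this dataset
    if label == 27:
        return "coccyx"               # not labeled in this dataset
    if label == 28:
        return "T13"                  # additional 13th thoracic vertebra
    raise ValueError("unknown VerSe label: %d" % label)

print(verse_label_name(1), verse_label_name(19), verse_label_name(28))
# prints: C1 T12 T13
```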
<size>38678870472</size>
</item><item>
<title>pubmed-baseline-update-data-12132023</title>
<category>Dataset</category>
<infohash>ef05353ca25232b5b3b043f0dd887456397701e2</infohash>
<guid>https://academictorrents.com/details/ef05353ca25232b5b3b043f0dd887456397701e2</guid>
<link>https://academictorrents.com/details/ef05353ca25232b5b3b043f0dd887456397701e2</link>
<description>PubMed baseline and update data snapshot from 12/13/2023. Uploaded by tytech038 (http://tytech038.com). Please be sure to abide by the terms and conditions (https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/README.txt).</description>
<size>59368035357</size>
</item><item>
<title>ALiEM ReSCu Peds Simulation eBook</title>
<category>Paper</category>
<infohash>d496e3c11633379f8166fdc4244dbe46771c7934</infohash>
<guid>https://academictorrents.com/details/d496e3c11633379f8166fdc4244dbe46771c7934</guid>
<link>https://academictorrents.com/details/d496e3c11633379f8166fdc4244dbe46771c7934</link>
<description>A compendium of 16 peer-reviewed simulation cases serving as a standardized national pediatric curriculum for all emergency medicine (EM) residency programs, based on high-priority pediatric-specific content. Intended as pediatric training for EM residents.</description>
<size>24841129</size>
</item><item>
<title>Mirror of RaiderBDev's reddit dumps, 23-11</title>
<category>Dataset</category>
<infohash>425b791647cdb2752f921351828452ca8e09aef8</infohash>
<guid>https://academictorrents.com/details/425b791647cdb2752f921351828452ca8e09aef8</guid>
<link>https://academictorrents.com/details/425b791647cdb2752f921351828452ca8e09aef8</link>
<description>https://github.com/ArthurHeitmann/arctic_shift Previous dumps of the blocks files can be found here: https://academictorrents.com/details/7810d20b3651c0060cb670032ec33818230f654d</description>
<size>71218387017</size>
</item><item>
<title>Reddit comments/submissions 2023-11</title>
<category>Dataset</category>
<infohash>aee7728b787892d3cce4d6df3c86c2728e2be1d7</infohash>
<guid>https://academictorrents.com/details/aee7728b787892d3cce4d6df3c86c2728e2be1d7</guid>
<link>https://academictorrents.com/details/aee7728b787892d3cce4d6df3c86c2728e2be1d7</link>
<description>Reddit comments and submissions from 2023-11, collected by pushshift and u/RaiderBDev. These are zstandard-compressed ndjson files. Example Python scripts for parsing the data can be found here: https://github.com/Watchful1/PushshiftDumps Previous months can be found here: https://academictorrents.com/details/89d24ff9d5fbc1efcdaf9d7689d72b7548f699fc</description>
<size>44688495052</size>
</item><item>
<title>CelebV-HQ</title>
<category>Dataset</category>
<infohash>843b5adb0358124d388c4e9836654c246b988ff4</infohash>
<guid>https://academictorrents.com/details/843b5adb0358124d388c4e9836654c246b988ff4</guid>
<link>https://academictorrents.com/details/843b5adb0358124d388c4e9836654c246b988ff4</link>
<description>Large-scale datasets have played indispensable roles in the recent success of face generation/editing and have significantly facilitated the advances of emerging research fields. However, the academic community still lacks a video dataset with diverse facial attribute annotations, which is crucial for research on face-related videos. In this work, we propose a large-scale, high-quality, and diverse video dataset with rich facial attribute annotations, named the High-Quality Celebrity Video Dataset (CelebV-HQ). CelebV-HQ contains 35,666 video clips with a resolution of at least 512x512, involving 15,653 identities. All clips are labeled manually with 83 facial attributes, covering appearance, action, and emotion. We conduct a comprehensive analysis in terms of age, ethnicity, brightness stability, motion smoothness, head pose diversity, and data quality to demonstrate the diversity and temporal coherence of CelebV-HQ. In addition, its versatility and potential are validated on two representative tasks, i.e., unconditional video generation and video facial attribute editing. Furthermore, we envision the future potential of CelebV-HQ, as well as the new opportunities and challenges it would bring to related research directions.</description>
<size>41393683493</size>
</item><item>
<title>mixtral-8x7b-32kseqlen</title>
<category>Dataset</category>
<infohash>5546272da9065eddeb6fcd7ffddeef5b75be79a7</infohash>
<guid>https://academictorrents.com/details/5546272da9065eddeb6fcd7ffddeef5b75be79a7</guid>
<link>https://academictorrents.com/details/5546272da9065eddeb6fcd7ffddeef5b75be79a7</link>
<description>▄▄▄░░ ▄▄▄▄▄█████████░░░░ ▄▄▄▄▄▄████████████████████░░░░░ █████████████████████████████░░░░░ ▄▄▄▄▄▄█████░░░       █████████████████████████████░░░░░ ▄▄▄▄▄██████████████████░░░░░░  ██████████████████████████████░░░░░ ▄█████████████████████████████░░░░░░░░██████████████████████████████░░░░░ ███████████████████████████████░░░░░░░██████████████████████████████░░░░░ ███████████████████████████████░░░░░░░██████████████████████████████░░░░░ ███████████████████████████████░░░░░░███████████████████████████████░░░░░ ████████████████████████████████░░░░░███████████████████████████████░░░░░ ████████████████████████████████░░░░████████████████████████████████░░░░░ █████████████████████████████████░░░████████████████████████████████░░░░░ █████████████████████████████████░░░████████████░███████████████████░░░░░ ██████████████████████████████████░█████████████░███████████████████░░░░░ ███████████████████░██████████████▄█████████████░███████████████████░░░░░ ███████████████████░███████████████████████████░░███████████████████░░░░░ ███████████████████░░██████████████████████████░░███████████████████░░░░░ ███████████████████░░█████████████████████████░░░███████████████████░░░░░ ███████████████████░░░████████████████████████░░░███████████████████░░░░░ ███████████████████░░░████████████████████████░░░███████████████████░░░░░ ███████████████████░░░░██████████████████████░░░░███████████████████░░░░░ ███████████████████░░░░██████████████████████░░░░███████████████████░░░░░ ███████████████████░░░░░█████████████████████░░░░███████████████████░░░░░ ███████████████████░░░░░████████████████████░░░░░███████████████████░░░░░ ███████████████████░░░░░░███████████████████░░░░░███████████████████░░░░░ ███████████████████░░░░░░██████████████████░░░░░░███████████████████░░░░░ ███████████████████░░░░░░░█████████████████░░░░░░███████████████████░░░░░ ███████████████████░░░░░░░█████████████████░░░░░░███████████████████░░░░░ 
███████████████████░░░░░░░░███████████████░░░░░░░██████████░░░░░░░░░░░░░░ ███████████████████░░░░░░░░███████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ ███████████████████░░░░░░░░███████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ ███████████████████░░░░░░░░░██░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ ███████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ ██████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░  ░░░░░░░ ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░      ░░░ ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░    ░░░░░░░░░░░░░░░░░░ ░░░░░░░░░░░░░░░░░░░░░░░░░░░░ ░░░░░░░░░░░░░░░░░ ░░░░░ ╓────────────────────────────────────────────────────────────────────────────╖ ║                      MISTRAL AI - 8x7B - v0.1  08/12/23                    ║ ╙────────────────────────────────────────────────────────────────────────────╜ ╓────────────────────────────────────────────────────────────────────────────╖ ║                                                                            ║ ║                               ·· md5sum ··                                 ║ ║                                                                            ║ ║        1faa9bc9b20fcfe81fcd4eb7166a79e6  consolidated.00.pth               ║ ║        37974873eb68a7ab30c4912fc36264ae  tokenizer.model                   ║ ╙────────────────────────────────────────────────────────────────────────────╜ ╓────────────────────────────────────────────────────────────────────────────╖ ║                                                                            ║ ║                    ·· Released by the Mistral AI team ··                   ║ ║                                                                            ║ ║                 Albert, Alexandre, Arthur, Blanche, Bam4d,                 ║ ║             Devendra, Diego, Emma, Florian, Gianna, Guillaume,             ║ ║            Guillaume, Lélio, Louis, Lucile, Marie-Anne, Pierre,            ║ ║              Teven, Theo, 
Thibaut, Thomas, Timothée, William               ║ ║                                                                            ║ ╙────────────────────────────────────────────────────────────────────────────╜</description>
<size>93406273410</size>
</item><item>
<title>PAX-Ray++ dataset</title>
<category>Dataset</category>
<infohash>9c0c157394f33376012516ba2ee2072187b10175</infohash>
<guid>https://academictorrents.com/details/9c0c157394f33376012516ba2ee2072187b10175</guid>
<link>https://academictorrents.com/details/9c0c157394f33376012516ba2ee2072187b10175</link>
<description>https://i.imgur.com/TMpYiL9.png Purpose: Interpreting chest radiographs (CXR) remains challenging due to the ambiguity of overlapping structures such as the lungs, heart, and bones. To address this issue, we propose a novel method for extracting fine-grained anatomical structures in CXR using pseudo-labeling of three-dimensional computed tomography (CT) scans. Methods: We created a large-scale dataset of 10,021 thoracic CTs with 157 labels and applied an ensemble of 3D anatomy segmentation models to extract anatomical pseudo-labels. These labels were projected onto a two-dimensional plane, similar to the CXR, allowing the training of detailed semantic segmentation models for CXR without any manual annotation effort. Results: Our resulting segmentation models demonstrated remarkable performance on CXR, with high model-annotator agreement against two radiologists (mIoU scores of 0.93 for frontal and 0.85 for lateral anatomy), while inter-annotator agreement remained at 0.95 and 0.83 mIoU. Our anatomical segmentations allowed the accurate extraction of relevant, explainable medical features such as the cardiothoracic ratio. Conclusion: Our method of volumetric pseudo-labeling paired with CT projection offers a promising approach for detailed anatomical segmentation of CXR with high agreement with human annotators. This technique may have important clinical implications, particularly in the analysis of various thoracic pathologies.</description>
<size>3852576886</size>
</item><item>
<title>Raw and processed data in paper entitled "Deep learning-based approach for high spatial resolution fiber shape sensing"</title>
<category>Dataset</category>
<infohash>c7fda3aebede0dd6fa2b4529814f36da806869e6</infohash>
<guid>https://academictorrents.com/details/c7fda3aebede0dd6fa2b4529814f36da806869e6</guid>
<link>https://academictorrents.com/details/c7fda3aebede0dd6fa2b4529814f36da806869e6</link>
<description>Raw and processed data used in the paper entitled "Deep learning-based approach for high spatial resolution fiber shape sensing", published in Communications Engineering. The code for processing the raw data is available at the link provided under the "Code availability" section of the paper. Alternative download link: https://zenodo.org/records/13293929</description>
<size>3597456207</size>
</item><item>
<title>2018LA_Seg.zip</title>
<category>Dataset</category>
<infohash>5819b4abdb0e8ea08eac0eafe35ee41ff92fdcc2</infohash>
<guid>https://academictorrents.com/details/5819b4abdb0e8ea08eac0eafe35ee41ff92fdcc2</guid>
<link>https://academictorrents.com/details/5819b4abdb0e8ea08eac0eafe35ee41ff92fdcc2</link>
<description>Segmentation of cardiac images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) widely used for visualizing diseased cardiac structures, is a crucial first step for clinical diagnosis and treatment. However, direct segmentation of LGE-MRIs is challenging due to their attenuated contrast. Since most clinical studies have relied on manual and labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the "2018 Left Atrium Segmentation Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset, and associated labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, extensive analysis of the submitted algorithms using technical and biological metrics was performed through subgroup analysis and hyper-parameter analysis, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm, significantly outperforming the prior state of the art. In particular, our analysis demonstrated that double, sequentially used CNNs, in which a first CNN is used for automatic region-of-interest localization and a subsequent CNN is used for refined regional segmentation, achieved far superior results to traditional methods and pipelines containing single CNNs. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for cardiac LGE-MRIs, and will serve as an important benchmark for evaluating and comparing future work in the field.</description>
<size>226034815</size>
</item><item>
<title>Early release of the expanded atlas of the sky in continuous gravitational waves</title>
<category>Dataset</category>
<infohash>2902f85dba08dc2d429e26f0a99b652896c3cc85</infohash>
<guid>https://academictorrents.com/details/2902f85dba08dc2d429e26f0a99b652896c3cc85</guid>
<link>https://academictorrents.com/details/2902f85dba08dc2d429e26f0a99b652896c3cc85</link>
<description>Early release of the expanded atlas of the sky in continuous gravitational waves https://www.atlas.aei.uni-hannover.de/work/volodya/O3a_2_atlas/</description>
<size>1065017965920</size>
</item><item>
<title>NORD Osmosis Videos 2023</title>
<category>Course</category>
<infohash>c5632854d1f5aa3740003cd4cf5b8b90d4851584</infohash>
<guid>https://academictorrents.com/details/c5632854d1f5aa3740003cd4cf5b8b90d4851584</guid>
<link>https://academictorrents.com/details/c5632854d1f5aa3740003cd4cf5b8b90d4851584</link>
<description/>
<size>713425200</size>
</item><item>
<title>Reddit comments/submissions 2023-10</title>
<category>Dataset</category>
<infohash>9a3f77cf1b16f064b8f82e75ee8d470b49c90512</infohash>
<guid>https://academictorrents.com/details/9a3f77cf1b16f064b8f82e75ee8d470b49c90512</guid>
<link>https://academictorrents.com/details/9a3f77cf1b16f064b8f82e75ee8d470b49c90512</link>
<description>Reddit comments and submissions from 2023-10, collected by pushshift and u/RaiderBDev. These are zstandard-compressed ndjson files. Example Python scripts for parsing the data can be found here: https://github.com/Watchful1/PushshiftDumps Previous months can be found here: https://academictorrents.com/details/89d24ff9d5fbc1efcdaf9d7689d72b7548f699fc</description>
<size>45413138899</size>
</item><item>
<title>Mirror of RaiderBDev's reddit dumps, 23-10</title>
<category>Dataset</category>
<infohash>52e18b6a61f243e6ae42a1f2fc8aaf9fd9c9dbdb</infohash>
<guid>https://academictorrents.com/details/52e18b6a61f243e6ae42a1f2fc8aaf9fd9c9dbdb</guid>
<link>https://academictorrents.com/details/52e18b6a61f243e6ae42a1f2fc8aaf9fd9c9dbdb</link>
<description>https://github.com/ArthurHeitmann/arctic_shift Previous dumps of the blocks files can be found here: https://academictorrents.com/details/7810d20b3651c0060cb670032ec33818230f654d</description>
<size>81124976574</size>
</item><item>
<title>React &amp; TypeScript Chrome Extension Development [2023]</title>
<category>Course</category>
<infohash>bd13646530182a285ddca32f1ad61d0e04a63b2f</infohash>
<guid>https://academictorrents.com/details/bd13646530182a285ddca32f1ad61d0e04a63b2f</guid>
<link>https://academictorrents.com/details/bd13646530182a285ddca32f1ad61d0e04a63b2f</link>
<description/>
<size>4520411136</size>
</item><item>
<title>Stack Overflow data dump 2022-06</title>
<category>Dataset</category>
<infohash>7210f09cc2d2e63a15663981f384fe21702b1456</infohash>
<guid>https://academictorrents.com/details/7210f09cc2d2e63a15663981f384fe21702b1456</guid>
<link>https://academictorrents.com/details/7210f09cc2d2e63a15663981f384fe21702b1456</link>
<description>Stack Overflow 2022-06 data dump in a SQL Server database.

# Stack Overflow SQL Server Database - 2022-06 Version

For more information and the latest release: https://www.brentozar.com/go/querystack

Imported from the Stack Exchange Data Dump as of June 2022: https://archive.org/details/stackexchange Imported using the Stack Overflow Data Dump Importer: https://github.com/BrentOzarULTD/soddi

This database is in Microsoft SQL Server 2016 format, which means you can attach it to any SQL Server 2016 or newer instance. To keep the size small but let you get started fast:

* All tables have a clustered index with page compression on
* No nonclustered or full-text indexes are included
* The log file is small, and you should grow it if you plan to modify data
* It's distributed as an mdf/ldf pair, so you don't need space to restore it
* It only includes StackOverflow.com data, not data for other Stack sites

As with the original data dump, this is provided under the CC-BY-SA 4.0 license: https://creativecommons.org/licenses/by-sa/4.0/ You are free to share this database and adapt it for any purpose, even commercially, but you must attribute it to the original authors: https://archive.org/details/stackexchange</description>
<size>59345626171</size>
</item><item>
<title>Open Osmosis Videos (2015-2018)</title>
<category>Course</category>
<infohash>4f491f0e13d763408ab6ff87cf7a8ddc48228806</infohash>
<guid>https://academictorrents.com/details/4f491f0e13d763408ab6ff87cf7a8ddc48228806</guid>
<link>https://academictorrents.com/details/4f491f0e13d763408ab6ff87cf7a8ddc48228806</link>
<description>These are the Open Osmosis videos which have been released under the CC-BY-SA 4.0 license; the videos were created in a collaboration with WikiProject Medicine and are available on Wikimedia Commons.</description>
<size>5214924068</size>
</item><item>
<title>Informatics an ecological glimpse</title>
<category>Paper</category>
<infohash>46e5253b8c8ff0c5adb0cb9a014ff0696419de56</infohash>
<guid>https://academictorrents.com/details/46e5253b8c8ff0c5adb0cb9a014ff0696419de56</guid>
<link>https://academictorrents.com/details/46e5253b8c8ff0c5adb0cb9a014ff0696419de56</link>
<description>This paper examines the connection between humans and informatics, with the aim of developing a line of thought capable of responding to current needs in the world by demonstrating the link between information technology and health. To that end, a practical case of the use of information technology in agriculture and e-commerce is analyzed through a case study. An argumentative discussion is also held, concluding that ecological thought is ambitious, since it brings the focus of informatics onto biological solutions, which leads to questioning forms of algorithmic efficiency in the ecosphere in the hope of a better understanding of nature.</description>
<size>1315840</size>
</item><item>
<title>Reddit comments/submissions 2005-06 to 2023-09</title>
<category>Dataset</category>
<infohash>89d24ff9d5fbc1efcdaf9d7689d72b7548f699fc</infohash>
<guid>https://academictorrents.com/details/89d24ff9d5fbc1efcdaf9d7689d72b7548f699fc</guid>
<link>https://academictorrents.com/details/89d24ff9d5fbc1efcdaf9d7689d72b7548f699fc</link>
<description>Reddit comments and submissions from 2005-06 to 2023-09, collected by pushshift and u/RaiderBDev. These are zstandard-compressed ndjson files. Example Python scripts for parsing the data can be found here: https://github.com/Watchful1/PushshiftDumps</description>
<size>2382602941228</size>
</item><item>
<title>Russian Wiktionary parsed ruwikt20230901</title>
<category>Dataset</category>
<infohash>df5f4f51a50d6ff24f5ee748a7290ae3c490eaac</infohash>
<guid>https://academictorrents.com/details/df5f4f51a50d6ff24f5ee748a7290ae3c490eaac</guid>
<link>https://academictorrents.com/details/df5f4f51a50d6ff24f5ee748a7290ae3c490eaac</link>
<description>Parsed Russian Wiktionary SQL dump (MySQL): ruwikt20230901_parsed.sql.7z (92 MB). Russian Wiktionary source database: ruwikt20230901.sql.7z (416 MB). Also included are 13 log files with errors generated by the parser while parsing the Russian Wiktionary. The Russian Wiktionary (https://ru.wiktionary.org) was parsed by the Wikokit software. The Wiktionary parser source code (Java) and documentation are available on GitHub: https://github.com/componavt/wikokit.

Unpack the dump:
  7z e ruwikt20230901_parsed.sql.7z
Import the unpacked SQL file into a MySQL database:
  mysql$ CREATE DATABASE ruwikt20230901_parsed;
  mysql$ USE ruwikt20230901_parsed;
  mysql$ SOURCE /path/ruwikt20230901_parsed.sql;</description>
<size>540540928</size>
</item><item>
<title>Mirror of RaiderBDev's reddit dumps</title>
<category>Dataset</category>
<infohash>7810d20b3651c0060cb670032ec33818230f654d</infohash>
<guid>https://academictorrents.com/details/7810d20b3651c0060cb670032ec33818230f654d</guid>
<link>https://academictorrents.com/details/7810d20b3651c0060cb670032ec33818230f654d</link>
<description>https://github.com/ArthurHeitmann/arctic_shift</description>
<size>443697396741</size>
</item><item>
<title>Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation</title>
<category>Dataset</category>
<infohash>8277ce3d862883f08846d87099e3af4d89fd94c1</infohash>
<guid>https://academictorrents.com/details/8277ce3d862883f08846d87099e3af4d89fd94c1</guid>
<link>https://academictorrents.com/details/8277ce3d862883f08846d87099e3af4d89fd94c1</link>
<description>AMOS provides 500 CT and 100 MRI scans collected from multi-center, multi-vendor, multi-modality, multi-phase, multi-disease patients, each with voxel-level annotations of 15 abdominal organs, providing challenging examples and test-bed for studying robust segmentation algorithms under diverse targets and scenarios. We further benchmark several state-of-the-art medical segmentation models to evaluate the status of the existing methods on this new challenging dataset. We have made our datasets, benchmark servers, and baselines publicly available, and hope to inspire future research. https://zenodo.org/record/7155725</description>
<size>24234336519</size>
</item><item>
<title>TotalSegmentator CT Dataset V2</title>
<category>Dataset</category>
<infohash>1dfeb3186514b40a2c212c21d494c665766bfbf4</infohash>
<guid>https://academictorrents.com/details/1dfeb3186514b40a2c212c21d494c665766bfbf4</guid>
<link>https://academictorrents.com/details/1dfeb3186514b40a2c212c21d494c665766bfbf4</link>
<description>Info: This is version 2 of the TotalSegmentator dataset. In 1228 CT images we segmented 117 anatomical structures covering a majority of relevant classes for most use cases. The CT images were randomly sampled from clinical routine, thus representing a real world dataset which generalizes to clinical application. The dataset contains a wide range of different pathologies, scanners, sequences and institutions. https://zenodo.org/record/8367088</description>
<size>23586975073</size>
</item><item>
<title>mistral-7B-v0.1</title>
<category>Dataset</category>
<infohash>208b101a0f51514ecf285885a8b0f6fb1a1e4d7d</infohash>
<guid>https://academictorrents.com/details/208b101a0f51514ecf285885a8b0f6fb1a1e4d7d</guid>
<link>https://academictorrents.com/details/208b101a0f51514ecf285885a8b0f6fb1a1e4d7d</link>
<description>Mistral 7B is a 7.3B parameter model that: - Outperforms Llama 2 13B on all benchmarks - Outperforms Llama 1 34B on many benchmarks - Approaches CodeLlama 7B performance on code, while remaining good at English tasks - Uses Grouped-query attention (GQA) for faster inference - Uses Sliding Window Attention (SWA) to handle longer sequences at smaller cost - We’re releasing Mistral 7B under the Apache 2.0 license, it can be used without restrictions.     ▄▄▄░░ ▄▄▄▄▄█████████░░░░ ▄▄▄▄▄▄████████████████████░░░░░ █████████████████████████████░░░░░ ▄▄▄▄▄▄█████░░░       █████████████████████████████░░░░░ ▄▄▄▄▄██████████████████░░░░░░  ██████████████████████████████░░░░░ ▄█████████████████████████████░░░░░░░░██████████████████████████████░░░░░ ███████████████████████████████░░░░░░░██████████████████████████████░░░░░ ███████████████████████████████░░░░░░░██████████████████████████████░░░░░ ███████████████████████████████░░░░░░███████████████████████████████░░░░░ ████████████████████████████████░░░░░███████████████████████████████░░░░░ ████████████████████████████████░░░░████████████████████████████████░░░░░ █████████████████████████████████░░░████████████████████████████████░░░░░ █████████████████████████████████░░░████████████░███████████████████░░░░░ ██████████████████████████████████░█████████████░███████████████████░░░░░ ███████████████████░██████████████▄█████████████░███████████████████░░░░░ ███████████████████░███████████████████████████░░███████████████████░░░░░ ███████████████████░░██████████████████████████░░███████████████████░░░░░ ███████████████████░░█████████████████████████░░░███████████████████░░░░░ ███████████████████░░░████████████████████████░░░███████████████████░░░░░ ███████████████████░░░████████████████████████░░░███████████████████░░░░░ ███████████████████░░░░██████████████████████░░░░███████████████████░░░░░ ███████████████████░░░░██████████████████████░░░░███████████████████░░░░░ 
███████████████████░░░░░█████████████████████░░░░███████████████████░░░░░ ███████████████████░░░░░████████████████████░░░░░███████████████████░░░░░ ███████████████████░░░░░░███████████████████░░░░░███████████████████░░░░░ ███████████████████░░░░░░██████████████████░░░░░░███████████████████░░░░░ ███████████████████░░░░░░░█████████████████░░░░░░███████████████████░░░░░ ███████████████████░░░░░░░█████████████████░░░░░░███████████████████░░░░░ ███████████████████░░░░░░░░███████████████░░░░░░░██████████░░░░░░░░░░░░░░ ███████████████████░░░░░░░░███████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ ███████████████████░░░░░░░░███████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ ███████████████████░░░░░░░░░██░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ ███████████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ ██████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░  ░░░░░░░ ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░      ░░░ ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░    ░░░░░░░░░░░░░░░░░░ ░░░░░░░░░░░░░░░░░░░░░░░░░░░░ ░░░░░░░░░░░░░░░░░ ░░░░░ ╓────────────────────────────────────────────────────────────────────────────╖ ║                      MISTRAL AI - 7B - v0.1  27/09/23                      ║ ╙────────────────────────────────────────────────────────────────────────────╜ ╓────────────────────────────────────────────────────────────────────────────╖ ║                                                                            ║ ║                               ·· md5sum ··                                 ║ ║                                                                            ║ ║        e2e37b12c741eb0782697362e40a262a  consolidated.00.pth               ║ ║        37974873eb68a7ab30c4912fc36264ae  tokenizer.model                   ║ ╙────────────────────────────────────────────────────────────────────────────╜ ╓────────────────────────────────────────────────────────────────────────────╖ ║                                                                
            ║ ║                    ·· Released by the Mistral AI team ··                   ║ ║                                                                            ║ ║          Albert, Alexandre, Arthur, Bam4d, Devendra, Diego, Florian,       ║ ║         Gianna, Guillaume, Lélio, Lucas, Lucile, Marie-Anne, Pierre,       ║ ║              Raphaël, Teven, Thibaut, Thomas, Timothée, William            ║ ║                                                                            ║ ╙────────────────────────────────────────────────────────────────────────────╜</description>
<size>14484008451</size>
</item><item>
<title>Domestika - Digital Fantasy Portraits with Photoshop</title>
<category>Paper</category>
<infohash>c8811876ce3d597af96411d3614aba5f92aba60d</infohash>
<guid>https://academictorrents.com/details/c8811876ce3d597af96411d3614aba5f92aba60d</guid>
<link>https://academictorrents.com/details/c8811876ce3d597af96411d3614aba5f92aba60d</link>
<description/>
<size>12830375936</size>
</item><item>
<title>Edexcel, Cambridge, AQA, IB Textbooks</title>
<category>Paper</category>
<infohash>405397ea084c2ece0bb4e66af77bc122b934179d</infohash>
<guid>https://academictorrents.com/details/405397ea084c2ece0bb4e66af77bc122b934179d</guid>
<link>https://academictorrents.com/details/405397ea084c2ece0bb4e66af77bc122b934179d</link>
<description>All the textbooks for Edexcel, Cambridge, AQA and IB for both GCSE and A Levels</description>
<size>7469006848</size>
</item><item>
<title>tbd_data</title>
<category>Dataset</category>
<infohash>e2d9634c15de74b2f6d199ff419ee2b560009bac</infohash>
<guid>https://academictorrents.com/details/e2d9634c15de74b2f6d199ff419ee2b560009bac</guid>
<link>https://academictorrents.com/details/e2d9634c15de74b2f6d199ff419ee2b560009bac</link>
<description/>
<size>8487</size>
</item><item>
<title>AlgoExpert - BlockchainExpert</title>
<category>Course</category>
<infohash>dde47215d885d6741144bfd23c78e066c8aa575b</infohash>
<guid>https://academictorrents.com/details/dde47215d885d6741144bfd23c78e066c8aa575b</guid>
<link>https://academictorrents.com/details/dde47215d885d6741144bfd23c78e066c8aa575b</link>
<description/>
<size>7523532800</size>
</item><item>
<title>OpenAlex Snapshot - 23-08-21 (Complete)</title>
<category>Dataset</category>
<infohash>c29c888839bf0512043c770e78ddbe321ea6567b</infohash>
<guid>https://academictorrents.com/details/c29c888839bf0512043c770e78ddbe321ea6567b</guid>
<link>https://academictorrents.com/details/c29c888839bf0512043c770e78ddbe321ea6567b</link>
<description>OpenAlex is a new, fully-open scientific knowledge graph (SKG), launched to replace the discontinued Microsoft Academic Graph (MAG). It contains metadata for 209M works (journal articles, books, etc); 213M disambiguated authors; 124k venues (places that host works, such as journals and online repositories); 109k institutions; and 65k Wikidata concepts (linked to works via an automated hierarchical multi-tag classifier). The dataset is fully and freely available via a web-based GUI, a full data dump, and a high-volume REST API. The resource is under active development and future work will improve accuracy and coverage of citation information and author/institution parsing and deduplication. From: Priem, J., Piwowar, H., &amp; Orr, R. (2022). OpenAlex: A fully-open index of scholarly works, authors, venues, institutions, and concepts. ArXiv. https://arxiv.org/abs/2205.01833 Upload details: Downloaded a copy from the AWS endpoint s3://openalex on 2023-08-21. Updates are rolling, so future torrents can just be patches unless major revisions are issued.</description>
<size>335409170855</size>
</item><item>
<title>GPT-Neo model weights</title>
<category>Dataset</category>
<infohash>8e175f4ee419f1cb8c319327b48a5b6b48b77ca8</infohash>
<guid>https://academictorrents.com/details/8e175f4ee419f1cb8c319327b48a5b6b48b77ca8</guid>
<link>https://academictorrents.com/details/8e175f4ee419f1cb8c319327b48a5b6b48b77ca8</link>
<description>Pretrained GPT-Neo models trained on The Pile https://the-eye.eu/public/AI/gptneo-release/</description>
<size>45048309970</size>
</item><item>
<title>iPSC/SYTO24/PhalloidinAF568 on 96-well plate; computational quantitative phase.</title>
<category>Dataset</category>
<infohash>c95c06e98a74a580ccbcceafdc1188ea144021c8</infohash>
<guid>https://academictorrents.com/details/c95c06e98a74a580ccbcceafdc1188ea144021c8</guid>
<link>https://academictorrents.com/details/c95c06e98a74a580ccbcceafdc1188ea144021c8</link>
<description>Complete dataset of the brightfield, darkfield, digital quantitative phase, and widefield fluorescence z-stack images of an iPSC culture on a 96-well plate, captured by the Caltech parallel ptychographic imaging system (codename: 96 Eyes). The cell culture is stained with SYTO24 and PhalloidinAF568 as a proof of concept. Also attached are preview images in PNG format, as well as minimal Python code to decode the raw data files. Published in: A.C.S. Chan, J Kim, A Pan, H Xu, D Nojima, C Hale, S Wang, C Yang, "Parallel Fourier ptychographic microscopy for high-throughput screening with 96 cameras (96 Eyes)" Scientific Reports 9, 11114 (2019). http://dx.doi.org/10.1038/s41598-019-47146-z Preprint: https://www.biorxiv.org/content/10.1101/547265v2 It is recommended that the raw pixels be decoded on demand; otherwise it is not practical to load all &gt;20GB of data into computer memory just to crop the images later. Read the attached Python script decoded.py to learn how to select the region of interest (ROI) in place and reduce disk read/write time. For Matlab users, on-demand pixel readout can be done with the h5read function.</description>
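The on-demand readout recommended in the description can be illustrated with a short Python sketch. This is a minimal, hypothetical example using a plain binary file and numpy memory mapping to show the principle only; the actual 96 Eyes files use the layout decoded by the attached decoded.py (or h5read for Matlab), and the file name, dtype, and frame shape below are invented for demonstration.

```python
import numpy as np

# Hypothetical illustration of on-demand ROI readout from a large raw file.
# The real 96 Eyes files are decoded by the attached decoded.py; the file
# name, dtype, and frame shape here are invented for demonstration.
frame_shape = (512, 512)
raw = np.zeros(frame_shape, dtype=np.uint16)
raw[150, 350] = 7                      # marker pixel inside the ROI
raw.tofile("frame0.raw")               # stand-in for one raw data file

# Memory-map the file so pixels are paged in only when sliced, instead of
# loading the whole frame into RAM just to crop it later.
frames = np.memmap("frame0.raw", dtype=np.uint16, mode="r",
                   shape=frame_shape)
roi = np.asarray(frames[100:200, 300:400])   # only this region is read
print(roi.shape, roi[50, 50])
```

Slicing the memmap touches only the bytes backing the requested region, which is the same idea behind selecting the ROI in place rather than loading the full stack.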
<size>23019910196</size>
</item><item>
<title>LLaMA Weights</title>
<category>Dataset</category>
<infohash>b8287ebfa04f879b048d4d4404108cf3e8014352</infohash>
<guid>https://academictorrents.com/details/b8287ebfa04f879b048d4d4404108cf3e8014352</guid>
<link>https://academictorrents.com/details/b8287ebfa04f879b048d4d4404108cf3e8014352</link>
<description>https://github.com/facebookresearch/llama https://github.com/Elyah2035/llama-dl</description>
<size>235164840000</size>
</item><item>
<title>RAPPPID Pre-Trained Weights</title>
<category>Dataset</category>
<infohash>6c690018c5786dbbb00161f62b0712d69296df97</infohash>
<guid>https://academictorrents.com/details/6c690018c5786dbbb00161f62b0712d69296df97</guid>
<link>https://academictorrents.com/details/6c690018c5786dbbb00161f62b0712d69296df97</link>
<description>These are weights from the original RAPPPID paper and are meant to be used with the RAPPPID code base.</description>
<size>11549848</size>
</item><item>
<title>Data Structure and Algorithms Courses by Algoexpert and Neetcode</title>
<category>Course</category>
<infohash>524d780dce185b43cbe8315b161ea201460f32c0</infohash>
<guid>https://academictorrents.com/details/524d780dce185b43cbe8315b161ea201460f32c0</guid>
<link>https://academictorrents.com/details/524d780dce185b43cbe8315b161ea201460f32c0</link>
<description>This bundle contains most of the Data Structures and Algorithms courses available from Algoexpert and Neetcode. They cover a curated list of algorithms and problems found on Leetcode.</description>
<size>29893144324</size>
</item><item>
<title>yahoo-groups</title>
<category>Dataset</category>
<infohash>b46563069b319e821b1b669696fe8b2b77eab5f7</infohash>
<guid>https://academictorrents.com/details/b46563069b319e821b1b669696fe8b2b77eab5f7</guid>
<link>https://academictorrents.com/details/b46563069b319e821b1b669696fe8b2b77eab5f7</link>
<description>This torrent contains a compressed, parsed dump of the data collected by archiveteam when Yahoo! Groups shut down.</description>
<size>1447387201536</size>
</item><item>
<title>RARBG torrent database</title>
<category>Dataset</category>
<infohash>a2ca83e177df5cb1966dfc1d262bc751e4987405</infohash>
<guid>https://academictorrents.com/details/a2ca83e177df5cb1966dfc1d262bc751e4987405</guid>
<link>https://academictorrents.com/details/a2ca83e177df5cb1966dfc1d262bc751e4987405</link>
<description>dynamic metainfo from client</description>
<size>393757022</size>
</item><item>
<title>LMR: A Large-Scale Multi-Reference Dataset for Reference-based Super-Resolution</title>
<category>Dataset</category>
<infohash>39424bb06d9172ac1c50fe4426eca51697bb4bfc</infohash>
<guid>https://academictorrents.com/details/39424bb06d9172ac1c50fe4426eca51697bb4bfc</guid>
<link>https://academictorrents.com/details/39424bb06d9172ac1c50fe4426eca51697bb4bfc</link>
<description>It is widely agreed that reference-based super-resolution (RefSR) achieves superior results by referring to similar high-quality images, compared to single image super-resolution (SISR). Intuitively, the more references, the better the performance. However, previous RefSR methods have all focused on single-reference image training, while multiple reference images are often available in testing or practical applications. The root cause of such a training-testing mismatch is the absence of publicly available multi-reference SR training datasets, which greatly hinders research efforts on multi-reference super-resolution. To this end, we construct a large-scale, multi-reference super-resolution dataset, named LMR. It contains 112,142 groups of 300x300 training images, which is 10x the size of the largest existing RefSR dataset. The image size is also much larger. More importantly, each group is equipped with 5 reference images with different similarity levels. Furthermore, we propose a new baseline method for multi-reference super-resolution: MRefSR, including a Multi-Reference Attention Module (MAM) for feature fusion of an arbitrary number of reference images, and a Spatial Aware Filtering Module (SAFM) for the fused feature selection. The proposed MRefSR achieves significant improvements over state-of-the-art approaches on both quantitative and qualitative evaluations.</description>
<size>56048390273</size>
</item><item>
<title>Information maximization for few-shot learning</title>
<category>Paper</category>
<infohash>7ec731cf0f949f81db0a903d812bd591cea4edec</infohash>
<guid>https://academictorrents.com/details/7ec731cf0f949f81db0a903d812bd591cea4edec</guid>
<link>https://academictorrents.com/details/7ec731cf0f949f81db0a903d812bd591cea4edec</link>
<description>We introduce Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task, in conjunction with a supervision loss based on the support set. Furthermore, we propose a new alternating-direction solver for our mutual-information loss, which substantially speeds up transductive-inference convergence over gradient-based optimization, while yielding similar accuracy. TIM inference is modular: it can be used on top of any base-training feature extractor. Following standard transductive few-shot settings, our comprehensive experiments demonstrate that TIM outperforms state-of-the-art methods significantly across various datasets and networks, when used on top of a fixed feature extractor trained with simple cross-entropy on the base classes, without resorting to complex meta-learning schemes. It consistently brings between 2% and 5% improvement in accuracy over the best performing method, not only on all the well-established few-shot benchmarks but also on more challenging scenarios, with domain shifts and larger numbers of classes.</description>
<size>3481426125</size>
</item><item>
<title>data.tar.gz</title>
<category>Paper</category>
<infohash>267d7c6cff11fdb4e20aa6e628931be881db67e6</infohash>
<guid>https://academictorrents.com/details/267d7c6cff11fdb4e20aa6e628931be881db67e6</guid>
<link>https://academictorrents.com/details/267d7c6cff11fdb4e20aa6e628931be881db67e6</link>
<description>We introduce Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task, in conjunction with a supervision loss based on the support set. Furthermore, we propose a new alternating-direction solver for our mutual-information loss, which substantially speeds up transductive-inference convergence over gradient-based optimization, while yielding similar accuracy. TIM inference is modular: it can be used on top of any base-training feature extractor. Following standard transductive few-shot settings, our comprehensive experiments demonstrate that TIM outperforms state-of-the-art methods significantly across various datasets and networks, when used on top of a fixed feature extractor trained with simple cross-entropy on the base classes, without resorting to complex meta-learning schemes. It consistently brings between 2% and 5% improvement in accuracy over the best performing method, not only on all the well-established few-shot benchmarks but also on more challenging scenarios, with domain shifts and larger numbers of classes.</description>
<size>29886266981</size>
</item><item>
<title>Shape Sensing of Optical Fiber Bragg Gratings Based on Deep Learning</title>
<category>Dataset</category>
<infohash>33ebcf714a480206fa9f76359e48a0355a974757</infohash>
<guid>https://academictorrents.com/details/33ebcf714a480206fa9f76359e48a0355a974757</guid>
<link>https://academictorrents.com/details/33ebcf714a480206fa9f76359e48a0355a974757</link>
<description>The raw data used to train the designed deep learning model for shape prediction using eFBG fiber sensor. The Jupyter notebook used for loading and splitting the data is also included. Alternative download link: https://zenodo.org/records/13268037</description>
<size>202538289</size>
</item><item>
<title>ZeroToMastery - PyTorch for Deep Learning</title>
<category>Course</category>
<infohash>b51e3cc0cc631bde4b1fe866a6d171046873b181</infohash>
<guid>https://academictorrents.com/details/b51e3cc0cc631bde4b1fe866a6d171046873b181</guid>
<link>https://academictorrents.com/details/b51e3cc0cc631bde4b1fe866a6d171046873b181</link>
<description>## About Learn PyTorch from scratch! This PyTorch course is your step-by-step guide to developing your own deep learning models using PyTorch. You’ll learn Deep Learning with PyTorch by building a massive 3-part real-world milestone project. By the end, you’ll have the skills and portfolio to get hired as a Deep Learning Engineer. Learn PyTorch. Become a Deep Learning Engineer. Get Hired. ## Course Overview We can guarantee (with, like, 99.57% confidence) that this is the most comprehensive, modern, and up-to-date course you will find to learn PyTorch and the cutting-edge field of Deep Learning. Daniel takes you step-by-step from an absolute beginner to becoming a master of Deep Learning with PyTorch. ## What You’ll Learn – Everything from getting started with using PyTorch to building your own real-world models – Why PyTorch is a fantastic way to start working in machine learning – Understand how to integrate Deep Learning into tools and applications – Create and utilize machine learning algorithms just like you would write a Python program – Build and deploy your own custom trained PyTorch neural network accessible to the public – How to take data, build an ML algorithm to find patterns, and then use that algorithm as an AI to enhance your applications – Master deep learning and become a top candidate for recruiters seeking Deep Learning Engineers – To expand your Machine Learning and Deep Learning skills and toolkit – The skills you need to become a Deep Learning Engineer and get hired with a chance of making US$100,000+ / year ## What will this PyTorch course be like? This PyTorch course is very hands-on and project-based. You won’t just be staring at your screen. We’ll leave that for other PyTorch tutorials and courses. 
In this course you’ll actually be: – Running experiments – Completing exercises to test your skills – Building real-world deep learning models and projects to mimic real-life scenarios By the end of it all, you’ll have the skillset needed to identify and develop modern deep learning solutions that Big Tech companies encounter. ⚠ Fair warning: this course is very comprehensive. But don’t be intimidated, Daniel will teach you everything from scratch and step-by-step! ## Here’s what you’ll learn in this PyTorch course: 1. PyTorch Fundamentals — We start with the bare-bones fundamentals, so even if you’re a beginner you’ll get up to speed. In machine learning, data gets represented as a tensor (a collection of numbers). Learning how to craft tensors with PyTorch is paramount to building machine learning algorithms. In PyTorch Fundamentals we cover the PyTorch tensor datatype in-depth. 2. PyTorch Workflow — Okay, you’ve got the fundamentals down, and you’ve made some tensors to represent data, but what now? With PyTorch Workflow you’ll learn the steps to go from data -&gt; tensors -&gt; trained neural network model. You’ll see and use these steps wherever you encounter PyTorch code as well as for the rest of the course. 3. PyTorch Neural Network Classification — Classification is one of the most common machine learning problems. – Is something one thing or another? – Is an email spam or not spam? – Is a credit card transaction fraud or not? With PyTorch Neural Network Classification you’ll learn how to code a neural network classification model using PyTorch so that you can classify things and answer these questions. 4. PyTorch Computer Vision — Neural networks have changed the game of computer vision forever. And now PyTorch drives many of the latest advancements in computer vision algorithms. For example, Tesla uses PyTorch to build the computer vision algorithms for its self-driving software. 
With PyTorch Computer Vision you’ll build a PyTorch neural network capable of finding patterns in images and classifying them into different categories. 5. PyTorch Custom Datasets — The magic of machine learning is building algorithms to find patterns in your own custom data. There are plenty of existing datasets out there, but how do you load your own custom dataset into PyTorch? This is exactly what you’ll learn with the PyTorch Custom Datasets section of this course. You’ll learn how to load an image dataset for FoodVision Mini: a PyTorch computer vision model capable of classifying images of pizza, steak and sushi (am I making you hungry to learn yet?!). We’ll be building upon FoodVision Mini for the rest of the course. 6. PyTorch Going Modular — The whole point of PyTorch is to be able to write Pythonic machine learning code. There are two main tools for writing machine learning code with Python: – A Jupyter/Google Colab notebook (great for experimenting) – Python scripts (great for reproducibility and modularity) In the PyTorch Going Modular section of this course, you’ll learn how to take your most useful Jupyter/Google Colab Notebook code and turn it into reusable Python scripts. This is often how you’ll find PyTorch code shared in the wild. 7. PyTorch Transfer Learning — What if you could take what one model has learned and leverage it for your own problems? That’s what PyTorch Transfer Learning covers. You’ll learn about the power of transfer learning and how it enables you to take a machine learning model trained on millions of images, modify it slightly, and enhance the performance of FoodVision Mini, saving you time and resources. 8. PyTorch Experiment Tracking — Now we’re going to start cooking with heat by starting Part 1 of our Milestone Project of the course! At this point you’ll have built plenty of PyTorch models. But how do you keep track of which model performs the best? That’s where PyTorch Experiment Tracking comes in. 
Following the machine learning practitioner’s motto of experiment, experiment, experiment! you’ll set up a system to keep track of various FoodVision Mini experiment results and then compare them to find the best. 9. PyTorch Paper Replicating — The field of machine learning advances quickly. New research papers get published every day. Being able to read and understand these papers takes time and practice. So that’s what PyTorch Paper Replicating covers. You’ll learn how to go through a machine learning research paper and replicate it with PyTorch code. At this point you’ll also undertake Part 2 of our Milestone Project, where you’ll replicate the groundbreaking Vision Transformer architecture! 10. PyTorch Model Deployment — By this stage your FoodVision model will be performing quite well. But up until now, you’ve been the only one with access to it. How do you get your PyTorch models in the hands of others? That’s what PyTorch Model Deployment covers. In Part 3 of your Milestone Project, you’ll learn how to take the best performing FoodVision Mini model and deploy it to the web so other people can access it and try it out with their own food images. ## Meet your instructor Your PyTorch instructor (Daniel) isn’t just a machine learning engineer with years of real-world professional experience. He has been in your shoes. He makes learning fun. He makes complex topics feel simple. He will motivate you. He will push you. And he will go above and beyond to help you succeed. Hi, I’m Daniel Bourke! I’m a self-taught Machine Learning Engineer who has worked at one of Australia’s fastest-growing artificial intelligence agencies, Max Kelsen, and I now use that expertise to teach thousands of students data science and machine learning. ## General Info: Author(s): Daniel Bourke Language: English Updated: 2/2023 Videos Duration: 49h 2m 32s Course Source: https://zerotomastery.io/courses/learn-pytorch/</description>
<size>17130410419</size>
</item><item>
<title>kiwi torrent research - uncensored bittorrent data set</title>
<category>Dataset</category>
<infohash>a7b9f1b1ffd2d7bbf6b11960a0c7bbf3a754e8a6</infohash>
<guid>https://academictorrents.com/details/a7b9f1b1ffd2d7bbf6b11960a0c7bbf3a754e8a6</guid>
<link>https://academictorrents.com/details/a7b9f1b1ffd2d7bbf6b11960a0c7bbf3a754e8a6</link>
<description>kiwi torrent research: uncensored bittorrent data set. 30/04/2023. 75.1 GiB. 61,536,875 torrents; 710,030,949 files. SQLite database:

| table | fields | indexed fields |
|---|---|---|
| torrents | infohash, name, size, uploaded, seeders, leechers, peers, num_files | infohash, size, uploaded |
| files | id, name, size | id |
| search (virtual, fts5) | name, size, uploaded, filename, filesize | * |

Torrents are inserted in descending order of seeders, leechers, peers, uploaded.</description>
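A minimal sketch of querying such a database with Python's standard sqlite3 module, assuming the schema listed above; the in-memory database and sample row below are invented for illustration (the real file would be opened by passing its path instead of :memory:).

```python
import sqlite3

# Recreate a miniature version of the published torrents schema in memory;
# the real database would be opened with sqlite3.connect("path/to/db").
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE torrents (
    infohash TEXT PRIMARY KEY, name TEXT, size INTEGER, uploaded TEXT,
    seeders INTEGER, leechers INTEGER, peers INTEGER, num_files INTEGER)""")
con.execute("CREATE INDEX idx_size ON torrents(size)")
con.execute("INSERT INTO torrents VALUES ('aa', 'demo torrent', 1024, "
            "'2023-04-30', 5, 1, 6, 2)")

# Name search against the plain table; the published file also offers an
# fts5 virtual table named 'search' for full-text queries.
rows = con.execute(
    "SELECT name, size, seeders FROM torrents "
    "WHERE name LIKE ? ORDER BY seeders DESC", ("%demo%",)).fetchall()
print(rows)
```

Because rows were inserted in descending seeder order, a seeder-ranked listing of the real file is close to a plain sequential scan.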
<size>29616922622</size>
</item><item>
<title>April 2023 Public Data File from Crossref</title>
<category>Dataset</category>
<infohash>d9e554f4f0c3047d9f49e448a7004f7aa1701b69</infohash>
<guid>https://academictorrents.com/details/d9e554f4f0c3047d9f49e448a7004f7aa1701b69</guid>
<link>https://academictorrents.com/details/d9e554f4f0c3047d9f49e448a7004f7aa1701b69</link>
<description>Note that this Crossref metadata is always openly available. The difference here is that we’ve done the time-saving work of putting all of the records registered through April 2023 into one file for download. To keep this metadata current, you can access new records via our public API at: https://api.crossref.org And, if you do use our API, we encourage you to read the section of the documentation on "etiquette". That is, how to use the API without making it impossible for others to use.</description>
<size>185875088653</size>
</item><item>
<title>Udemy - MongoDB - The Complete Developer's Guide</title>
<category>Course</category>
<infohash>67511d8777c824dcde4ceba01cfc9c4931c4df6a</infohash>
<guid>https://academictorrents.com/details/67511d8777c824dcde4ceba01cfc9c4931c4df6a</guid>
<link>https://academictorrents.com/details/67511d8777c824dcde4ceba01cfc9c4931c4df6a</link>
<description/>
<size>6044495488</size>
</item><item>
<title>Udemy - Angular &amp; NodeJS - The MEAN Stack Guide</title>
<category>Course</category>
<infohash>0aa8701e7ac69f17b3528876fbfb50d4ae0aa9a4</infohash>
<guid>https://academictorrents.com/details/0aa8701e7ac69f17b3528876fbfb50d4ae0aa9a4</guid>
<link>https://academictorrents.com/details/0aa8701e7ac69f17b3528876fbfb50d4ae0aa9a4</link>
<description/>
<size>8108686573</size>
</item><item>
<title>Udemy - The Complete Flutter Development Bootcamp with Dart</title>
<category>Course</category>
<infohash>99433c1797b219a1ec5dcec21fa4213881e5b194</infohash>
<guid>https://academictorrents.com/details/99433c1797b219a1ec5dcec21fa4213881e5b194</guid>
<link>https://academictorrents.com/details/99433c1797b219a1ec5dcec21fa4213881e5b194</link>
<description/>
<size>13151613294</size>
</item><item>
<title>Udemy - Artificial Intelligence A-Z Learn How To Build An AI</title>
<category>Course</category>
<infohash>750ab85e01a0d443bd0d19e49b250f8896e1e791</infohash>
<guid>https://academictorrents.com/details/750ab85e01a0d443bd0d19e49b250f8896e1e791</guid>
<link>https://academictorrents.com/details/750ab85e01a0d443bd0d19e49b250f8896e1e791</link>
<description/>
<size>2601915325</size>
</item><item>
<title>Udemy - Deep Learning and Computer Vision A-Z OpenCV, SSD &amp; GANs</title>
<category>Course</category>
<infohash>d131e045c7fec5256025d72035a00e9fc69f24e7</infohash>
<guid>https://academictorrents.com/details/d131e045c7fec5256025d72035a00e9fc69f24e7</guid>
<link>https://academictorrents.com/details/d131e045c7fec5256025d72035a00e9fc69f24e7</link>
<description/>
<size>4636240336</size>
</item><item>
<title>Udemy - Docker and Kubernetes The Complete Guide</title>
<category>Course</category>
<infohash>1f8c426ea3909c9b5b083f06bebcfee24fea03ae</infohash>
<guid>https://academictorrents.com/details/1f8c426ea3909c9b5b083f06bebcfee24fea03ae</guid>
<link>https://academictorrents.com/details/1f8c426ea3909c9b5b083f06bebcfee24fea03ae</link>
<description/>
<size>12673349618</size>
</item><item>
<title>Udemy - Learn Ethical Hacking From Scratch</title>
<category>Course</category>
<infohash>dd0942e4a1782c0a662e9c16682ab977da156670</infohash>
<guid>https://academictorrents.com/details/dd0942e4a1782c0a662e9c16682ab977da156670</guid>
<link>https://academictorrents.com/details/dd0942e4a1782c0a662e9c16682ab977da156670</link>
<description/>
<size>9144824964</size>
</item><item>
<title>Udemy - NodeJS - The Complete Guide (incl. MVC, REST APIs, GraphQL)</title>
<category>Course</category>
<infohash>d10602c009ef3b4794994976476399865c76956d</infohash>
<guid>https://academictorrents.com/details/d10602c009ef3b4794994976476399865c76956d</guid>
<link>https://academictorrents.com/details/d10602c009ef3b4794994976476399865c76956d</link>
<description/>
<size>18592827396</size>
</item><item>
<title>Udemy - The Complete Android N Developer Course</title>
<category>Course</category>
<infohash>d9b124bec55927b282f19dd143b367f0f3f79be1</infohash>
<guid>https://academictorrents.com/details/d9b124bec55927b282f19dd143b367f0f3f79be1</guid>
<link>https://academictorrents.com/details/d9b124bec55927b282f19dd143b367f0f3f79be1</link>
<description/>
<size>7194560255</size>
</item><item>
<title>Udemy - Machine Learning A-Z  Become Kaggle Master</title>
<category>Course</category>
<infohash>9e378efb6e2f67de46c6c3660d9675be50bfc21f</infohash>
<guid>https://academictorrents.com/details/9e378efb6e2f67de46c6c3660d9675be50bfc21f</guid>
<link>https://academictorrents.com/details/9e378efb6e2f67de46c6c3660d9675be50bfc21f</link>
<description/>
<size>15004863898</size>
</item><item>
<title>MacTeX-2023.pkg</title>
<category>Dataset</category>
<infohash>e9735003fa77ab3cedace4e661adf2d50baa5184</infohash>
<guid>https://academictorrents.com/details/e9735003fa77ab3cedace4e661adf2d50baa5184</guid>
<link>https://academictorrents.com/details/e9735003fa77ab3cedace4e661adf2d50baa5184</link>
<description/>
<size>5514266910</size>
</item><item>
<title>Count of comments/submissions per subreddit per month</title>
<category>Dataset</category>
<infohash>afc7da0f1bfb3c9f8a2fba1438f8f6f2b9d099cf</infohash>
<guid>https://academictorrents.com/details/afc7da0f1bfb3c9f8a2fba1438f8f6f2b9d099cf</guid>
<link>https://academictorrents.com/details/afc7da0f1bfb3c9f8a2fba1438f8f6f2b9d099cf</link>
<description>These are separate, monthly sorted lists of all subreddits on reddit that are found in the pushshift dump files, through the end of 2022 https://academictorrents.com/details/7c0645c94321311bb05bd879ddee4d0eba08aaee Each line is a subreddit, followed by a tab, followed by an integer giving the total number of comments and submissions in that subreddit across all its history.</description>
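The line format described above (subreddit, tab, count) can be parsed in a few lines of Python; the sample subreddit names and counts below are invented for illustration.

```python
# Parse the tab-separated "subreddit TAB count" format described above.
# The two sample lines are invented for illustration.
sample = "askreddit\t500123\nprogramming\t80456\n"

counts = {}
for line in sample.splitlines():
    name, count = line.split("\t")
    counts[name] = int(count)

print(counts["programming"])
```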
<size>881102260</size>
</item><item>
<title>TBX11K Tuberculosis Classification and Detection Challenge</title>
<category>Dataset</category>
<infohash>07a9e9d43be209b1547f4829c9cb376f30551d6c</infohash>
<guid>https://academictorrents.com/details/07a9e9d43be209b1547f4829c9cb376f30551d6c</guid>
<link>https://academictorrents.com/details/07a9e9d43be209b1547f4829c9cb376f30551d6c</link>
<description>As a serious infectious disease, tuberculosis (TB) is one of the major threats to human health worldwide, leading to millions of deaths every year. Although early diagnosis and treatment can greatly improve the chances of survival, it remains a major challenge, especially in developing countries. Computer-aided tuberculosis diagnosis (CTD) is a promising choice for TB diagnosis due to the great successes of deep learning. However, when it comes to TB diagnosis, the lack of training data has hampered the progress of CTD. To solve this problem, we establish a large-scale TB dataset, namely the Tuberculosis X-ray (TBX11K) dataset. This dataset contains 11,200 X-ray images with corresponding bounding box annotations for TB areas, while the existing largest public TB dataset only has 662 X-ray images with corresponding image-level annotations. The proposed dataset enables the training of sophisticated detectors for high-quality CTD. Rethinking Computer-Aided Tuberculosis Diagnosis, Yun Liu*, Yu-Huan Wu*, Yunfeng Ban, Huifang Wang, Ming-Ming Cheng, IEEE CVPR, 2020. https://i.imgur.com/js4y8dv.jpg</description>
<size>3306757175</size>
</item><item>
<title>Reddit comments/submissions 2023-02</title>
<category>Dataset</category>
<infohash>9971c68d2909843a100ae955c6ab6de3e09c04a1</infohash>
<guid>https://academictorrents.com/details/9971c68d2909843a100ae955c6ab6de3e09c04a1</guid>
<link>https://academictorrents.com/details/9971c68d2909843a100ae955c6ab6de3e09c04a1</link>
<description>Reddit comments and submissions from 2023-02 collected by pushshift which can be found here https://files.pushshift.io/reddit/ These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps</description>
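Once a dump file is decompressed, each line is a single JSON object, so the data can be streamed record by record. A minimal sketch with Python's standard json module, using two invented sample records (field names such as "subreddit" and "body" are the usual pushshift comment fields; see the linked PushshiftDumps repository for scripts that also handle the zstandard decompression):

```python
import json

# Each line of a (decompressed) pushshift dump is one JSON object.
# These two sample records are invented; real ones carry many more fields.
sample = (
    '{"subreddit": "science", "author": "alice", "body": "hello"}\n'
    '{"subreddit": "askreddit", "author": "bob", "body": "hi"}\n'
)

# Process line by line so the full dump never has to fit in memory.
comments = [json.loads(line) for line in sample.splitlines()]
science = [c for c in comments if c["subreddit"] == "science"]
print(len(science))
```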
<size>34428747095</size>
</item><item>
<title>Reddit comments/submissions 2023-01</title>
<category>Dataset</category>
<infohash>c861d265525c488a9439fb874bd9c3fc38dcdfa5</infohash>
<guid>https://academictorrents.com/details/c861d265525c488a9439fb874bd9c3fc38dcdfa5</guid>
<link>https://academictorrents.com/details/c861d265525c488a9439fb874bd9c3fc38dcdfa5</link>
<description>Reddit comments and submissions from 2023-01 collected by pushshift which can be found here https://files.pushshift.io/reddit/ These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps</description>
<size>46980955519</size>
</item><item>
<title>Subreddit comments/submissions 2005-06 to 2022-12</title>
<category>Dataset</category>
<infohash>c398a571976c78d346c325bd75c47b82edf6124e</infohash>
<guid>https://academictorrents.com/details/c398a571976c78d346c325bd75c47b82edf6124e</guid>
<link>https://academictorrents.com/details/c398a571976c78d346c325bd75c47b82edf6124e</link>
<description>These are the top 20,000 subreddits from reddit's history, in separate files. You can use your torrent client to only download the subreddits you're interested in. These are from the pushshift dumps from 2005-06 to 2022-12, which can be found here https://academictorrents.com/details/7c0645c94321311bb05bd879ddee4d0eba08aaee These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps</description>
<size>1657656143449</size>
</item><item>
<title>The WILDTRACK Seven-Camera HD Dataset (repack)</title>
<category>Dataset</category>
<infohash>6d5542b0d245ff4d37680f67f2fb96750e6d8c60</infohash>
<guid>https://academictorrents.com/details/6d5542b0d245ff4d37680f67f2fb96750e6d8c60</guid>
<link>https://academictorrents.com/details/6d5542b0d245ff4d37680f67f2fb96750e6d8c60</link>
<description>The zip file has been repackaged to work with modern unzip tools. The original appeared corrupted if not using the Mac unzip command. The challenging and realistic setup of the ‘WILDTRACK‘ dataset brings multi-camera detection and tracking methods into the wild. It meets the need of deep learning methods for a large-scale multi-camera dataset of walking pedestrians, where the cameras’ fields of view largely overlap. Acquired with current high-tech hardware, it provides HD-resolution data. Further, its high-precision joint calibration and synchronization should allow for the development of new algorithms that go beyond what is possible with currently available datasets. The data acquisition took place in front of the main building of ETH Zurich, Switzerland, in good weather conditions. The sequences are of resolution 1920×1080 pixels, shot at 60 frames per second. https://i.imgur.com/Hzamclh.jpg ## Description of available files Synchronized frames extracted at a frame rate of 10 fps and 1920×1080 resolution, post-processed to remove distortion; Calibration files which use the pinhole camera model, compatible with the projection functions provided in the OpenCV library. Both the extrinsic and the intrinsic calibrations are available; The ground-truth annotations in a ‘json’ file format (please see the separate section below); For ease of use in methods focusing on classification, we also provide a file we refer to as the ‘positions’ file, in ‘json’ format. For details please refer to the section below. Please check this page for updates, which shall extend the download list with: full videos; corresponding point annotations, which may be used for camera calibration algorithms; and a second part of this dataset which, albeit not annotated, can be used for unsupervised methods. 
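The pinhole camera model used by the calibration files can be sketched in a few lines of numpy; the intrinsic/extrinsic values below are invented placeholders, not actual WILDTRACK calibration data (those come from the provided files):

```python
import numpy as np

# Minimal pinhole projection sketch for the calibration model named above.
# K, R, t and the 3D point are invented placeholders, not real WILDTRACK
# calibration values.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])        # intrinsics
R = np.eye(3)                          # extrinsic rotation
t = np.array([0.0, 0.0, 5.0])          # extrinsic translation (metres)

X = np.array([1.0, 0.5, 0.0])          # a 3D point in world coordinates
x_cam = R @ X + t                      # world frame to camera frame
u, v, w = K @ x_cam                    # apply the intrinsics
pixel = (u / w, v / w)                 # normalise to pixel coordinates
print(pixel)
```

This is the same model OpenCV's projection functions implement (plus distortion terms, which these undistorted frames no longer need).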
## Positions file The ‘positions’ file allows for omitting the work with calibration files and focusing, for instance, on classification, while making use of the fact that the cameras are static. It consists of information about where exactly a given set of particular volumes of space projects to in all of the views. The height of each volume corresponds to an average person’s height. We discretize the ground surface as a regular grid. The 3D space occupied if a person is standing at a particular position is modelled by a cylinder positioned centrally on the grid point. Each cylinder projects into each of the separate 2D views as a rectangle whose position in the view is given in pixel coordinates. Using a 480×1440 grid – totalling 691,200 positions – and the provided camera calibration files, we generate this file, which is available for download. Each position is assigned an ID using 0-based enumeration ([0, 691199]). The view indices in this file follow the same enumeration, i.e. they range between 0 and 6 inclusive. Positions which are not visible in a given view are assigned coordinates of -1. ## Annotations</description>
<size>62118417713</size>
</item><item>
<title>Paris Buildings dataset</title>
<category>Dataset</category>
<infohash>c7282e45bbcc1c9569b0e488f8e888cce0ec73f0</infohash>
<guid>https://academictorrents.com/details/c7282e45bbcc1c9569b0e488f8e888cce0ec73f0</guid>
<link>https://academictorrents.com/details/c7282e45bbcc1c9569b0e488f8e888cce0ec73f0</link>
<description>The Paris Dataset consists of 6412 images collected from Flickr by searching for particular Paris landmarks.</description>
<size>2612601037</size>
</item><item>
<title>Seeing the Arrow of Time</title>
<category>Dataset</category>
<infohash>2218d0125f356b350d4dc5b5a59cd0a7116ee27a</infohash>
<guid>https://academictorrents.com/details/2218d0125f356b350d4dc5b5a59cd0a7116ee27a</guid>
<link>https://academictorrents.com/details/2218d0125f356b350d4dc5b5a59cd0a7116ee27a</link>
<description>This dataset consists of 6-10 second clips from 180 high-quality videos.  It contains 155 forward sequences and 25 intentionally reversed sequences (i.e. the video plays backwards in time).</description>
<size>13346722532</size>
</item><item>
<title>Oxford Buildings dataset</title>
<category>Dataset</category>
<infohash>4f49c36d9f768d21c947d71393cb39c32730a054</infohash>
<guid>https://academictorrents.com/details/4f49c36d9f768d21c947d71393cb39c32730a054</guid>
<link>https://academictorrents.com/details/4f49c36d9f768d21c947d71393cb39c32730a054</link>
<description>The Oxford Buildings Dataset consists of 5062 images collected from Flickr by searching for particular Oxford landmarks.  The collection has been manually annotated to generate a comprehensive ground truth for 11 different landmarks, each represented by 5 possible queries. This gives a set of 55 queries over which an object retrieval system can be evaluated.</description>
<size>4121078166</size>
</item><item>
<title>Sculptures 6k dataset</title>
<category>Dataset</category>
<infohash>45e3d613c72dbd3f7e0a5b27d1955b67ace0a014</infohash>
<guid>https://academictorrents.com/details/45e3d613c72dbd3f7e0a5b27d1955b67ace0a014</guid>
<link>https://academictorrents.com/details/45e3d613c72dbd3f7e0a5b27d1955b67ace0a014</link>
<description>The Sculptures 6k Dataset consists of 6340 images collected from Flickr by searching for sculptures by Henry Moore and Auguste Rodin.  The dataset is split equally into a train and test set, each containing 3170 images.  For each set, 10 different Henry Moore sculptures are chosen as query objects, and for each of these objects 7 images and query regions are defined, thus providing 70 queries for performance evaluation purposes.</description>
<size>801367120</size>
</item><item>
<title>List of all subreddits on reddit</title>
<category>Dataset</category>
<infohash>bdcd92135f8718d4920801bd474638c4708f0995</infohash>
<guid>https://academictorrents.com/details/bdcd92135f8718d4920801bd474638c4708f0995</guid>
<link>https://academictorrents.com/details/bdcd92135f8718d4920801bd474638c4708f0995</link>
<description>This is a sorted list of all subreddits on reddit that are found in the pushshift dump files, through the end of 2022: https://academictorrents.com/details/7c0645c94321311bb05bd879ddee4d0eba08aaee Each line is a subreddit, followed by a tab, followed by an integer giving the total number of comments and submissions in that subreddit across all its history.</description>
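The line format described above (name, tab, count) can be parsed in a few lines of Python; the example line is made up:

```python
def parse_line(line):
    """Split one line of the list into (subreddit, total_count)."""
    name, count = line.rstrip("\n").split("\t")
    return name, int(count)

# A made-up line in the documented format:
name, count = parse_line("AskReddit\t1234567\n")
```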
<size>245531295</size>
</item><item>
<title>Antiwork submissions/comments</title>
<category>Dataset</category>
<infohash>299583147a14d0f4c8dcd81f10983a0da8c36ec7</infohash>
<guid>https://academictorrents.com/details/299583147a14d0f4c8dcd81f10983a0da8c36ec7</guid>
<link>https://academictorrents.com/details/299583147a14d0f4c8dcd81f10983a0da8c36ec7</link>
<description>All submissions and comments in r/antiwork from the creation of the subreddit through December 2022. Extracted from the pushshift dump files: https://academictorrents.com/details/7c0645c94321311bb05bd879ddee4d0eba08aaee An example python script for iterating over the lines in these dumps is here: https://github.com/Watchful1/PushshiftDumps/blob/master/scripts/single_file.py If you are interested in a similar file for another subreddit, feel free to DM u/Watchful1 on reddit</description>
<size>2486306383</size>
</item><item>
<title>DCASE 2022 Task 5: Few-shot Bioacoustic Event Detection Development Set v3</title>
<category>Dataset</category>
<infohash>37bc2e21fa17b7695ccdfe7d44410dda3e3fde16</infohash>
<guid>https://academictorrents.com/details/37bc2e21fa17b7695ccdfe7d44410dda3e3fde16</guid>
<link>https://academictorrents.com/details/37bc2e21fa17b7695ccdfe7d44410dda3e3fde16</link>
<description>The development set for task 5 of DCASE 2022, Few-shot Bioacoustic Event Detection, consists of 192 audio files acquired from different bioacoustic sources. The dataset is split into training and validation sets. Multi-class annotations are provided for the training set with positive (POS), negative (NEG) and unknown (UNK) values for each class. UNK indicates uncertainty about a class. Single-class (class of interest) annotations are provided for the validation set, with events marked as positive (POS) or unknown (UNK) for the class of interest. Also available at https://zenodo.org/record/6482837</description>
<size>6592329239</size>
</item><item>
<title>SegThy Open-Access Dataset for Thyroid and Neck Segmentation</title>
<category>Dataset</category>
<infohash>a6530eb901e8c1c127166d1bebeffb0129f5bf9f</infohash>
<guid>https://academictorrents.com/details/a6530eb901e8c1c127166d1bebeffb0129f5bf9f</guid>
<link>https://academictorrents.com/details/a6530eb901e8c1c127166d1bebeffb0129f5bf9f</link>
<description>## Motivation Ultrasound (US) imaging plays a central role in the diagnosis of thyroid diseases as well as various pathologies of the neck region. Additionally, US is used for treatment planning, in the case of radioiodine therapy, and as a means of following up on the success of different therapeutic efforts. Yet, in the vast majority of hospitals, 2D freehand US is applied. This type of examination has been shown to have high intra-observer and inter-observer variability [1], and low accuracy in terms of volume prediction in a thyroid volumetry setting using MRI as ground truth. To tackle these problems, we propose the use of 3D US in combination with machine learning (ML) algorithms to automatically segment the most relevant organs in the region, as well as anomalies such as nodules and tumors. 3D US can be acquired in several ways, e.g. using 3D US probes, robotic US, so-called wobbler US probes, or freehand tracked US. The last option has gained traction over recent years as it enables almost any existing commercial 2D US system to be extended at low cost. On the ML side, growing availability and computing power are rapidly expanding its use in medicine. Yet, ML can only provide trustworthy results if sufficient (annotated) data is available to “learn” from. This motivated us to create and publish this dataset, and thus make a relevant contribution to the community. ## Citation [1] Tracked 3D ultrasound and deep neural network-based thyroid segmentation reduce interobserver variability in thyroid volumetry; M. Krönke, C. Eilers, D. Dimova, M. Köhler, G. Buschner, L. Schweiger, L. Konstantinidou, M. Makowski, J. Nagarajah, N. Navab, W. Weber, and T. Wendler; PLOS ONE - July 29, 2022 - https://doi.org/10.1371/journal.pone.0268550 ## Dataset Description Sub-dataset 1 This sub-dataset consists of 28 (healthy) volunteers who were scanned using freehand tracked ultrasonography (both sides of the neck, with focus on the thyroid). 
A Siemens Acuson NX-3 US machine, combined with a 12 MHz VF12-4 probe, was adapted for electromagnetic tracking using the Piur tUS system. Additionally, all volunteers were imaged with MRI (Siemens Biograph mMR) using a T1-weighted VIBE (volumetric interpolated breath-hold) sequence centered on the thyroid area of the neck. The magnetic field strength was 3T. In practical terms, each volunteer in this sub-dataset has: 1x MRI (T1-weighted VIBE sequence) of the neck area, with voxel size 0.625x0.625x1 mm3 and field of view 320x320x80 mm3. 9x two-sided 3D US covering the thyroid region with voxel size of 0.12x0.12x0.12 mm3 and variable field of view. The nine 3D US scans were acquired by three physicians (three scans each). Label maps for the thyroid in all MR and US images. Sub-dataset 2 The second sub-dataset consists of 186 patients undergoing routine thyroid US either as an initial diagnostic means or as follow-up. The same US device and freehand tracked US extension were used. Each patient in this sub-dataset has 6x two-sided 3D US covering the thyroid. The six 3D US scans were acquired by two physicians (three scans each). In both sub-datasets, the age and sex of the patients are included in the metadata. The dataset is currently being extended to include labels for the trachea, the common carotid arteries, the internal jugular veins, and (if present) thyroid nodules in sub-dataset 1 (in both US and MRI), and for the same organs in sub-dataset 2 in US. See the figures for examples of the multilabel annotations. ## License This dataset is licensed under a CC BY license. This means that users can distribute, remix, adapt, and build upon the material with the sole condition that the creators are acknowledged. The license permits commercial use as long as the authors are cited.</description>
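As a quick sanity check on the stated MRI geometry, the voxel grid dimensions follow from dividing the field of view by the voxel size (a small illustrative helper, not part of the dataset):

```python
def voxel_grid(fov_mm, spacing_mm):
    """Number of voxels per axis given field of view and voxel size."""
    return tuple(round(f / s) for f, s in zip(fov_mm, spacing_mm))

# MRI in sub-dataset 1: 0.625 x 0.625 x 1 mm voxels over a
# 320 x 320 x 80 mm field of view gives a 512 x 512 x 80 grid.
mri_dims = voxel_grid((320, 320, 80), (0.625, 0.625, 1.0))
```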
<size>18577127500</size>
</item><item>
<title>Reddit comments/submissions 2005-06 to 2022-12</title>
<category>Dataset</category>
<infohash>7c0645c94321311bb05bd879ddee4d0eba08aaee</infohash>
<guid>https://academictorrents.com/details/7c0645c94321311bb05bd879ddee4d0eba08aaee</guid>
<link>https://academictorrents.com/details/7c0645c94321311bb05bd879ddee4d0eba08aaee</link>
<description>Reddit comments and submissions from 2005-06 to 2022-12 collected by pushshift which can be found here https://files.pushshift.io/reddit/ These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps</description>
<size>1994218462768</size>
</item><item>
<title>CAMUS Cardiac Acquisitions for Multi-structure Ultrasound Segmentation</title>
<category>Dataset</category>
<infohash>ae545c1e3ce045c33942f89e67f618a6439104a6</infohash>
<guid>https://academictorrents.com/details/ae545c1e3ce045c33942f89e67f618a6439104a6</guid>
<link>https://academictorrents.com/details/ae545c1e3ce045c33942f89e67f618a6439104a6</link>
<description>The goal of this project is to provide the community with all the materials needed to address the problem of echocardiographic image segmentation and volume estimation from 2D ultrasound sequences (both two- and four-chamber views). To this end, the following solution was set up: the introduction of the largest publicly available and fully annotated dataset for 2D echocardiographic assessment (to our knowledge). The CAMUS dataset, containing 2D apical four-chamber and two-chamber view sequences acquired from 500 patients, is made available for download. # Dataset properties The overall CAMUS dataset consists of clinical exams from 500 patients, acquired at the University Hospital of St Etienne (France) and included in this study within the regulations set by the local ethical committee of the hospital after full anonymization. The acquisitions were optimized to perform left ventricle ejection fraction measurements. In order to enforce clinical realism, no prerequisites or data selection were imposed. Consequently, - some cases were difficult to trace; - the dataset involves a wide variability of acquisition settings; - for some patients, parts of the wall were not visible in the images; for some cases, the probe orientation recommendation to acquire a rigorous four-chamber view was simply impossible to follow and a five-chamber view was acquired instead. This produced a highly heterogeneous dataset, both in terms of image quality and pathological cases, which is typical of daily clinical practice data. The dataset comprises: i) a training set of 450 patients along with the corresponding manual references based on the analysis of one clinical expert; ii) a testing set composed of 50 new patients. The raw input images are provided in the raw/mhd file format. 
# Study population Half of the dataset population has a left ventricle ejection fraction lower than 45% and is thus considered at pathological risk (beyond the uncertainty of the measurement). Also, 19% of the images are of poor quality (based on the opinion of one expert), indicating that for this subgroup the localization of the left ventricle endocardium and epicardium, as well as the estimation of clinical indices, is not considered clinically accurate and workable. In classical analysis, poor-quality images are usually removed from the dataset because of their clinical uselessness. Therefore, these data were not involved in the computation of the different metrics in this project, but were used to study their influence as part of the training and validation sets for deep learning techniques. # Involved systems</description>
<size>3833291884</size>
</item><item>
<title>Elastic Malware Benchmark for Empowering Researchers 2018</title>
<category>Dataset</category>
<infohash>34854ec5114020b33224cedc97fe78731d057df4</infohash>
<guid>https://academictorrents.com/details/34854ec5114020b33224cedc97fe78731d057df4</guid>
<link>https://academictorrents.com/details/34854ec5114020b33224cedc97fe78731d057df4</link>
<description>The EMBER dataset is a collection of features from PE files that serve as a benchmark dataset for researchers. The EMBER2017 dataset contained features from 1.1 million PE files scanned in or before 2017 and the EMBER2018 dataset contains features from 1 million PE files scanned in or before 2018. This repository makes it easy to reproducibly train the benchmark models, extend the provided feature set, or classify new PE files with the benchmark models. This paper describes many more details about the dataset: https://arxiv.org/abs/1804.04637 # Cite H. Anderson and P. Roth, "EMBER: An Open Dataset for Training Static PE Malware Machine Learning Models", in ArXiv e-prints, Apr. 2018.</description>
<size>1696539273</size>
</item><item>
<title>Elastic Malware Benchmark for Empowering Researchers 2017 Part 2</title>
<category>Dataset</category>
<infohash>37b7632e616a2c61749782c442a1a76be1619ffc</infohash>
<guid>https://academictorrents.com/details/37b7632e616a2c61749782c442a1a76be1619ffc</guid>
<link>https://academictorrents.com/details/37b7632e616a2c61749782c442a1a76be1619ffc</link>
<description>The EMBER dataset is a collection of features from PE files that serve as a benchmark dataset for researchers. The EMBER2017 dataset contained features from 1.1 million PE files scanned in or before 2017 and the EMBER2018 dataset contains features from 1 million PE files scanned in or before 2018. This repository makes it easy to reproducibly train the benchmark models, extend the provided feature set, or classify new PE files with the benchmark models. This paper describes many more details about the dataset: https://arxiv.org/abs/1804.04637 # Cite H. Anderson and P. Roth, "EMBER: An Open Dataset for Training Static PE Malware Machine Learning Models", in ArXiv e-prints, Apr. 2018.</description>
<size>1751237573</size>
</item><item>
<title>Elastic Malware Benchmark for Empowering Researchers 2017 Part 1</title>
<category>Dataset</category>
<infohash>6554f671056a97155eaa89c9349bbdc26ed48068</infohash>
<guid>https://academictorrents.com/details/6554f671056a97155eaa89c9349bbdc26ed48068</guid>
<link>https://academictorrents.com/details/6554f671056a97155eaa89c9349bbdc26ed48068</link>
<description>The EMBER dataset is a collection of features from PE files that serve as a benchmark dataset for researchers. The EMBER2017 dataset contained features from 1.1 million PE files scanned in or before 2017 and the EMBER2018 dataset contains features from 1 million PE files scanned in or before 2018. This repository makes it easy to reproducibly train the benchmark models, extend the provided feature set, or classify new PE files with the benchmark models. This paper describes many more details about the dataset: https://arxiv.org/abs/1804.04637 # Cite H. Anderson and P. Roth, "EMBER: An Open Dataset for Training Static PE Malware Machine Learning Models", in ArXiv e-prints, Apr. 2018.</description>
<size>1673963887</size>
</item><item>
<title>HMC-QU echocardiography ultrasound recordings</title>
<category>Dataset</category>
<infohash>11832dbd0b58c1dd9305a10373c9536872dd31af</infohash>
<guid>https://academictorrents.com/details/11832dbd0b58c1dd9305a10373c9536872dd31af</guid>
<link>https://academictorrents.com/details/11832dbd0b58c1dd9305a10373c9536872dd31af</link>
<description>The HMC-QU benchmark dataset was created through a collaboration between Hamad Medical Corporation (HMC), Tampere University, and Qatar University. The usage of the data was approved by the local ethics board of HMC Hospital in February 2019. The dataset includes a collection of apical 4-chamber (A4C) and apical 2-chamber (A2C) view 2D echocardiography recordings obtained during 2018 and 2019. The echocardiography recordings were acquired with devices from different vendors, namely Philips and GE Vivid (GE-Health-USA) ultrasound machines. The temporal resolution (frame rate) of the echocardiography recordings is 25 fps. The spatial resolution varies from 422x636 to 768x1024 pixels. The dataset can be utilized for both myocardial infarction (heart attack) detection and left ventricle wall segmentation purposes. # Detection of Myocardial Infarction HMC-QU is the first dataset shared with the research community for myocardial infarction (MI) detection on the left ventricle wall of the heart. The recordings are drawn from over 10,000 echos performed in a year, including more than 800 cases admitted with acute ST-elevation MI. The patients with MI were treated with coronary angiogram/angioplasty after the diagnosis of acute MI with electrocardiography and cardiac enzyme evidence. The patients had echocardiography recordings obtained within 24 hours of admission or, in some cases, before they underwent coronary angioplasty. The subjects not diagnosed with MI underwent a required health check and investigation for other reasons in the hospital. The ground-truth labels are provided for each myocardial segment illustrated in Figure 1 as non-MI and MI, where the MI label indicates any sign of regional wall motion abnormality, whereas subjects without regional wall motion abnormality are assigned to non-MI. Frames spanning one cardiac cycle are predefined for each recording. 
End-diastole and end-systole frames are defined according to the electrocardiography (ECG) recordings of the patients. For patients without ECG recordings, the cardiac cycle is defined by the frames where the left ventricle area is largest and smallest. 1.1. Apical 4-chamber The HMC-QU dataset consists of 162 A4C view 2D echocardiography recordings. The A4C view recordings belong to 93 MI patients (all first-time and acute MI) and 69 non-MI subjects. 1.2. Apical 2-chamber The dataset contains 130 A2C view 2D echocardiography recordings that belong to 68 MI patients and 62 non-MI subjects. # Segmentation of the Left Ventricle Wall A subset of 109 A4C view echocardiography recordings has corresponding ground-truth segmentation masks for the whole left ventricle wall at each frame of one cardiac cycle. This subset includes 72 MI patients and 37 non-MI subjects. The size of the ground-truth segmentation masks is 224x224 in order to provide suitable input dimensions for many state-of-the-art deep network topologies. If you use the HMC-QU dataset in your research, please consider citing the publications below: [P1] A. Degerli, S. Kiranyaz, T. Hamid, R. Mazhar, and M. Gabbouj, “Early Myocardial Infarction Detection over Multi-view Echocardiography,” arXiv preprint arXiv:2111.05790v2, 2021, https://doi.org/10.48550/arXiv.2111.05790. [P2] A. Degerli, M. Zabihi, S. Kiranyaz, T. Hamid, R. Mazhar, R. Hamila, and M. Gabbouj, "Early Detection of Myocardial Infarction in Low-Quality Echocardiography," in IEEE Access, vol. 9, pp. 34442-34453, 2021, https://doi.org/10.1109/ACCESS.2021.3059595. [P3] S. Kiranyaz, A. Degerli, T. Hamid, R. Mazhar, R. E. F. Ahmed, R. Abouhasera, M. Zabihi, J. Malik, R. Hamila, and M. Gabbouj, "Left Ventricular Wall Motion Estimation by Active Polynomials for Acute Myocardial Infarction Detection," in IEEE Access, vol. 8, pp. 210301-210317, 2020, https://doi.org/10.1109/ACCESS.2020.3038743. 
https://i.imgur.com/QKsdWPb.jpg</description>
<size>2492458234</size>
</item><item>
<title>STructured Analysis of the Retina</title>
<category>Dataset</category>
<infohash>e4554cd63400dc13b74477efe98032c10757c269</infohash>
<guid>https://academictorrents.com/details/e4554cd63400dc13b74477efe98032c10757c269</guid>
<link>https://academictorrents.com/details/e4554cd63400dc13b74477efe98032c10757c269</link>
<description>The STARE (STructured Analysis of the Retina) Project was conceived and initiated in 1975 by Michael Goldbaum, M.D., at the University of California, San Diego. It was funded by the U.S. National Institutes of Health. During its history, over thirty people contributed to the project, with backgrounds ranging from medicine to science to engineering. Images and clinical data were provided by the Shiley Eye Center at the University of California, San Diego, and by the Veterans Administration Medical Center in San Diego. I had the pleasure of working on the project from 1996-2004. The contents of this web page reflect my contributions. Please contact me if you have any questions or requests concerning our data or code. Please contact Dr. Goldbaum if you have any requests concerning the current state of the project. # A brief overview of the project An ophthalmologist is a medical doctor who specializes in the structure, function, and diseases of the human eye. During a clinical examination, an ophthalmologist notes findings that are visible in the eyes of the subject. The ophthalmologist then uses these findings to reason about the health of the subject. For instance, a patient may exhibit discoloration of the optic nerve, or a narrowing of the blood vessels in the retina. An ophthalmologist uses this information to diagnose the patient as having, for instance, Coats disease or a central retinal artery occlusion. A common procedure during an examination is retinal imaging. An optical camera is used to see through the pupil of the eye to the rear inner surface of the eyeball. A picture is taken showing the optic nerve, fovea, surrounding vessels, and the retinal layer. The ophthalmologist can then reference this image while considering any observed findings. This research concerns a system to automatically diagnose diseases of the human eye. The system takes as input information observable in a retinal image. 
This information is formulated to mimic the findings that an ophthalmologist would note during a clinical examination. The main output of the system is a diagnosis formulated to mimic the conclusion that an ophthalmologist would reach about the health of the subject. Our approach breaks the problem into two components. The first component concerns automatically processing a retinal image to denote the important findings. The second component concerns automatically reasoning about the findings to determine a diagnosis. Additional outputs include detailed measurements of the anatomical structures and lesions visible in the retinal image. These measurements are useful for tracking disease severity and evaluating treatment progress over time. By collecting a database of measurements for a large number of people, the STARE project could support clinical population studies and intern training. https://i.imgur.com/rMBvdYq.jpg # Papers A lot has been published on this project by many people; these are my two most relevant papers: A. Hoover, V. Kouznetsova and M. Goldbaum, "Locating Blood Vessels in Retinal Images by Piecewise Threshold Probing of a Matched Filter Response", IEEE Transactions on Medical Imaging, vol. 19 no. 3, pp. 203-210, March 2000. A. Hoover and M. Goldbaum, "Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels", IEEE Transactions on Medical Imaging, vol. 22 no. 8, pp. 951-958, August 2003.</description>
<size>484369267</size>
</item><item>
<title>Data of the White Matter Hyperintensity (WMH) Segmentation Challenge</title>
<category>Dataset</category>
<infohash>a6d90ae5a9ff4cc8184f122048495fd6bd18d6ba</infohash>
<guid>https://academictorrents.com/details/a6d90ae5a9ff4cc8184f122048495fd6bd18d6ba</guid>
<link>https://academictorrents.com/details/a6d90ae5a9ff4cc8184f122048495fd6bd18d6ba</link>
<description>Data of the WMH Segmentation Challenge, including the training data, test data, manual annotations, and additional manual annotations. Contents: - readme.pdf - training: contains all training data that was originally released - test: contains all test data - additional_annotations: contains additional manual annotations of two extra observers Code: https://github.com/hjkuijf/wmhchallenge https://wmh.isi.uu.nl/ https://i.imgur.com/RJjPBbP.png</description>
<size>8715128611</size>
</item><item>
<title>demo</title>
<category>Dataset</category>
<infohash>e7a8c1393dd418f8ec8383c0c55e70fc65ae44ca</infohash>
<guid>https://academictorrents.com/details/e7a8c1393dd418f8ec8383c0c55e70fc65ae44ca</guid>
<link>https://academictorrents.com/details/e7a8c1393dd418f8ec8383c0c55e70fc65ae44ca</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>676821041</size>
</item><item>
<title>3_1_4_4</title>
<category>Dataset</category>
<infohash>64dbbf6ddf8f3755b2ac54022ed0724cc0c15400</infohash>
<guid>https://academictorrents.com/details/64dbbf6ddf8f3755b2ac54022ed0724cc0c15400</guid>
<link>https://academictorrents.com/details/64dbbf6ddf8f3755b2ac54022ed0724cc0c15400</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>71663131025</size>
</item><item>
<title>3_1_4_3</title>
<category>Dataset</category>
<infohash>eae0b94f6b08ed2d963347d022a6acc25d0152eb</infohash>
<guid>https://academictorrents.com/details/eae0b94f6b08ed2d963347d022a6acc25d0152eb</guid>
<link>https://academictorrents.com/details/eae0b94f6b08ed2d963347d022a6acc25d0152eb</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>41850346305</size>
</item><item>
<title>3_1_4_2</title>
<category>Dataset</category>
<infohash>0bdd40bd2c52deefafb92e319d8edffa6b2d05a5</infohash>
<guid>https://academictorrents.com/details/0bdd40bd2c52deefafb92e319d8edffa6b2d05a5</guid>
<link>https://academictorrents.com/details/0bdd40bd2c52deefafb92e319d8edffa6b2d05a5</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>36155204481</size>
</item><item>
<title>3_1_4_1</title>
<category>Dataset</category>
<infohash>fe46d8df7cd23b90aabef15eba78fb7cd108b683</infohash>
<guid>https://academictorrents.com/details/fe46d8df7cd23b90aabef15eba78fb7cd108b683</guid>
<link>https://academictorrents.com/details/fe46d8df7cd23b90aabef15eba78fb7cd108b683</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>39587199671</size>
</item><item>
<title>3_1_3_5</title>
<category>Dataset</category>
<infohash>84ad174f34f9e984a42cc61772ab9f6096abeb18</infohash>
<guid>https://academictorrents.com/details/84ad174f34f9e984a42cc61772ab9f6096abeb18</guid>
<link>https://academictorrents.com/details/84ad174f34f9e984a42cc61772ab9f6096abeb18</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>33154615369</size>
</item><item>
<title>3_1_3_4</title>
<category>Dataset</category>
<infohash>6c8e124629b5cc4bb4ffaced2a1246d7834c6d9f</infohash>
<guid>https://academictorrents.com/details/6c8e124629b5cc4bb4ffaced2a1246d7834c6d9f</guid>
<link>https://academictorrents.com/details/6c8e124629b5cc4bb4ffaced2a1246d7834c6d9f</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>50992209670</size>
</item><item>
<title>3_1_3_3</title>
<category>Dataset</category>
<infohash>0b258a67805ca3bf5d9f1734b9812bd9d716e7fd</infohash>
<guid>https://academictorrents.com/details/0b258a67805ca3bf5d9f1734b9812bd9d716e7fd</guid>
<link>https://academictorrents.com/details/0b258a67805ca3bf5d9f1734b9812bd9d716e7fd</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>13324269724</size>
</item><item>
<title>3_1_3_2</title>
<category>Dataset</category>
<infohash>700da3173a602de58abcf87cb236ad76579d05b0</infohash>
<guid>https://academictorrents.com/details/700da3173a602de58abcf87cb236ad76579d05b0</guid>
<link>https://academictorrents.com/details/700da3173a602de58abcf87cb236ad76579d05b0</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>94112775153</size>
</item><item>
<title>3_1_3_1</title>
<category>Dataset</category>
<infohash>8ee7430e8940f1e52801aeebd0df30aa6364845a</infohash>
<guid>https://academictorrents.com/details/8ee7430e8940f1e52801aeebd0df30aa6364845a</guid>
<link>https://academictorrents.com/details/8ee7430e8940f1e52801aeebd0df30aa6364845a</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>15256957192</size>
</item><item>
<title>3_1_2_2</title>
<category>Dataset</category>
<infohash>8f7aa85aaae7a2b90af97e31b189fff2de16acd6</infohash>
<guid>https://academictorrents.com/details/8f7aa85aaae7a2b90af97e31b189fff2de16acd6</guid>
<link>https://academictorrents.com/details/8f7aa85aaae7a2b90af97e31b189fff2de16acd6</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>139573201969</size>
</item><item>
<title>3_1_2_1</title>
<category>Dataset</category>
<infohash>01ffac062d45d7911a2ed32439f784817e142c4f</infohash>
<guid>https://academictorrents.com/details/01ffac062d45d7911a2ed32439f784817e142c4f</guid>
<link>https://academictorrents.com/details/01ffac062d45d7911a2ed32439f784817e142c4f</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>65352134438</size>
</item><item>
<title>3_1_1_2</title>
<category>Dataset</category>
<infohash>e2a766872fc760b89cec7f30178405c1dbb56de3</infohash>
<guid>https://academictorrents.com/details/e2a766872fc760b89cec7f30178405c1dbb56de3</guid>
<link>https://academictorrents.com/details/e2a766872fc760b89cec7f30178405c1dbb56de3</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>127286556800</size>
</item><item>
<title>3_1_1_1</title>
<category>Dataset</category>
<infohash>16800c90c1088b7c170844db15d3084a344c4bc3</infohash>
<guid>https://academictorrents.com/details/16800c90c1088b7c170844db15d3084a344c4bc3</guid>
<link>https://academictorrents.com/details/16800c90c1088b7c170844db15d3084a344c4bc3</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>76203588731</size>
</item><item>
<title>2_2_5_1</title>
<category>Dataset</category>
<infohash>e623ecd29f304d7c24292c770b143675586ba097</infohash>
<guid>https://academictorrents.com/details/e623ecd29f304d7c24292c770b143675586ba097</guid>
<link>https://academictorrents.com/details/e623ecd29f304d7c24292c770b143675586ba097</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>84947989771</size>
</item><item>
<title>2_2_4_1</title>
<category>Dataset</category>
<infohash>d5feefe9b9995aef9a46af96277fc82aed142a95</infohash>
<guid>https://academictorrents.com/details/d5feefe9b9995aef9a46af96277fc82aed142a95</guid>
<link>https://academictorrents.com/details/d5feefe9b9995aef9a46af96277fc82aed142a95</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>83726396462</size>
</item><item>
<title>2_2_3_1</title>
<category>Dataset</category>
<infohash>55887ed7b56105b1f9c59e16e678f07138bdd30b</infohash>
<guid>https://academictorrents.com/details/55887ed7b56105b1f9c59e16e678f07138bdd30b</guid>
<link>https://academictorrents.com/details/55887ed7b56105b1f9c59e16e678f07138bdd30b</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>95507474100</size>
</item><item>
<title>2_2_2_1</title>
<category>Dataset</category>
<infohash>059526428b93da9fa3c43bec10482f39f41cd47f</infohash>
<guid>https://academictorrents.com/details/059526428b93da9fa3c43bec10482f39f41cd47f</guid>
<link>https://academictorrents.com/details/059526428b93da9fa3c43bec10482f39f41cd47f</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>33626794541</size>
</item><item>
<title>2_2_1_4</title>
<category>Dataset</category>
<infohash>d5c428838a8da6b6c1589e3218b5484110bf9b61</infohash>
<guid>https://academictorrents.com/details/d5c428838a8da6b6c1589e3218b5484110bf9b61</guid>
<link>https://academictorrents.com/details/d5c428838a8da6b6c1589e3218b5484110bf9b61</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>54399190761</size>
</item><item>
<title>2_2_1_3</title>
<category>Dataset</category>
<infohash>3e936112d7d94a0219b12a6b7913ad5544ef7157</infohash>
<guid>https://academictorrents.com/details/3e936112d7d94a0219b12a6b7913ad5544ef7157</guid>
<link>https://academictorrents.com/details/3e936112d7d94a0219b12a6b7913ad5544ef7157</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>54574789334</size>
</item><item>
<title>2_2_1_2</title>
<category>Dataset</category>
<infohash>8abf043460bfd1bf393ed2c9d5ab2b29fe8984fe</infohash>
<guid>https://academictorrents.com/details/8abf043460bfd1bf393ed2c9d5ab2b29fe8984fe</guid>
<link>https://academictorrents.com/details/8abf043460bfd1bf393ed2c9d5ab2b29fe8984fe</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>30851164770</size>
</item><item>
<title>2_2_1_1</title>
<category>Dataset</category>
<infohash>24b4a5b32af944b071b651085f3f4196e1c596ce</infohash>
<guid>https://academictorrents.com/details/24b4a5b32af944b071b651085f3f4196e1c596ce</guid>
<link>https://academictorrents.com/details/24b4a5b32af944b071b651085f3f4196e1c596ce</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>104974561351</size>
</item><item>
<title>2_1_10_3</title>
<category>Dataset</category>
<infohash>04fcbced6d4cc80e1351b07d13e5b4972f001c34</infohash>
<guid>https://academictorrents.com/details/04fcbced6d4cc80e1351b07d13e5b4972f001c34</guid>
<link>https://academictorrents.com/details/04fcbced6d4cc80e1351b07d13e5b4972f001c34</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>37692912407</size>
</item><item>
<title>2_1_10_2</title>
<category>Dataset</category>
<infohash>3efafabbd74622ea798c26c79d926a4277f792df</infohash>
<guid>https://academictorrents.com/details/3efafabbd74622ea798c26c79d926a4277f792df</guid>
<link>https://academictorrents.com/details/3efafabbd74622ea798c26c79d926a4277f792df</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>31947662108</size>
</item><item>
<title>2_1_10_1</title>
<category>Dataset</category>
<infohash>e833f9fa36aaf7f928db4ac3ff8374e82c958ce5</infohash>
<guid>https://academictorrents.com/details/e833f9fa36aaf7f928db4ac3ff8374e82c958ce5</guid>
<link>https://academictorrents.com/details/e833f9fa36aaf7f928db4ac3ff8374e82c958ce5</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>41037244222</size>
</item><item>
<title>2_1_9_2</title>
<category>Dataset</category>
<infohash>63ba6ddc8d46f50d2ddd135e39586e6bd13e48d4</infohash>
<guid>https://academictorrents.com/details/63ba6ddc8d46f50d2ddd135e39586e6bd13e48d4</guid>
<link>https://academictorrents.com/details/63ba6ddc8d46f50d2ddd135e39586e6bd13e48d4</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>54747598892</size>
</item><item>
<title>2_1_9_1</title>
<category>Dataset</category>
<infohash>edd32dbeb19d35c7700dba803dccbd1a22d13852</infohash>
<guid>https://academictorrents.com/details/edd32dbeb19d35c7700dba803dccbd1a22d13852</guid>
<link>https://academictorrents.com/details/edd32dbeb19d35c7700dba803dccbd1a22d13852</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>129291242505</size>
</item><item>
<title>2_1_8_3</title>
<category>Dataset</category>
<infohash>537ced78b9742cc096ee24b67fbe775d23ebd6e9</infohash>
<guid>https://academictorrents.com/details/537ced78b9742cc096ee24b67fbe775d23ebd6e9</guid>
<link>https://academictorrents.com/details/537ced78b9742cc096ee24b67fbe775d23ebd6e9</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>94280070949</size>
</item><item>
<title>2_1_8_2</title>
<category>Dataset</category>
<infohash>98c98d1d7739867025efe625be49d1a6662859f8</infohash>
<guid>https://academictorrents.com/details/98c98d1d7739867025efe625be49d1a6662859f8</guid>
<link>https://academictorrents.com/details/98c98d1d7739867025efe625be49d1a6662859f8</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>81077651910</size>
</item><item>
<title>2_1_8_1</title>
<category>Dataset</category>
<infohash>0373a66538d045c40f0df54bb6e231925be664c4</infohash>
<guid>https://academictorrents.com/details/0373a66538d045c40f0df54bb6e231925be664c4</guid>
<link>https://academictorrents.com/details/0373a66538d045c40f0df54bb6e231925be664c4</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>75607697964</size>
</item><item>
<title>2_1_7_3</title>
<category>Dataset</category>
<infohash>01ffd620f829c433ab43989678c95d1fa06e86c5</infohash>
<guid>https://academictorrents.com/details/01ffd620f829c433ab43989678c95d1fa06e86c5</guid>
<link>https://academictorrents.com/details/01ffd620f829c433ab43989678c95d1fa06e86c5</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>65340415873</size>
</item><item>
<title>2_1_7_2</title>
<category>Dataset</category>
<infohash>8063f308aac7ba7d4451f3c3c92a5746051709e4</infohash>
<guid>https://academictorrents.com/details/8063f308aac7ba7d4451f3c3c92a5746051709e4</guid>
<link>https://academictorrents.com/details/8063f308aac7ba7d4451f3c3c92a5746051709e4</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>98232057525</size>
</item><item>
<title>2_1_7_1</title>
<category>Dataset</category>
<infohash>bb71258a6b3a1fbaeac554069dd886966d5cb6b1</infohash>
<guid>https://academictorrents.com/details/bb71258a6b3a1fbaeac554069dd886966d5cb6b1</guid>
<link>https://academictorrents.com/details/bb71258a6b3a1fbaeac554069dd886966d5cb6b1</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>79199920564</size>
</item><item>
<title>2_1_6_1</title>
<category>Dataset</category>
<infohash>d79b473252cb3f751e4c90699ed4f7929ea88a0f</infohash>
<guid>https://academictorrents.com/details/d79b473252cb3f751e4c90699ed4f7929ea88a0f</guid>
<link>https://academictorrents.com/details/d79b473252cb3f751e4c90699ed4f7929ea88a0f</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>74846911421</size>
</item><item>
<title>2_1_5_3</title>
<category>Dataset</category>
<infohash>c4a015d468731754b2f218177bb5ee2efb3ae518</infohash>
<guid>https://academictorrents.com/details/c4a015d468731754b2f218177bb5ee2efb3ae518</guid>
<link>https://academictorrents.com/details/c4a015d468731754b2f218177bb5ee2efb3ae518</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>53083943861</size>
</item><item>
<title>2_1_5_2</title>
<category>Dataset</category>
<infohash>bca5828b4998a5159484a5611e0bb015b62f9ac5</infohash>
<guid>https://academictorrents.com/details/bca5828b4998a5159484a5611e0bb015b62f9ac5</guid>
<link>https://academictorrents.com/details/bca5828b4998a5159484a5611e0bb015b62f9ac5</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>66292723375</size>
</item><item>
<title>2_1_5_1</title>
<category>Dataset</category>
<infohash>0f714caf5de0585afb24ea4a37101f7840a2d61d</infohash>
<guid>https://academictorrents.com/details/0f714caf5de0585afb24ea4a37101f7840a2d61d</guid>
<link>https://academictorrents.com/details/0f714caf5de0585afb24ea4a37101f7840a2d61d</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>57867883028</size>
</item><item>
<title>2_1_4_1</title>
<category>Dataset</category>
<infohash>ad9f72da79ed4e4000cd64462f9243cc9bfa91a5</infohash>
<guid>https://academictorrents.com/details/ad9f72da79ed4e4000cd64462f9243cc9bfa91a5</guid>
<link>https://academictorrents.com/details/ad9f72da79ed4e4000cd64462f9243cc9bfa91a5</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>54458732965</size>
</item><item>
<title>2_1_3_1</title>
<category>Dataset</category>
<infohash>d03eea7dd43c1238115512a70317558095ced0df</infohash>
<guid>https://academictorrents.com/details/d03eea7dd43c1238115512a70317558095ced0df</guid>
<link>https://academictorrents.com/details/d03eea7dd43c1238115512a70317558095ced0df</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>46993719041</size>
</item><item>
<title>2_1_2_3</title>
<category>Dataset</category>
<infohash>6e329a869eb0e3e8c53e79b6039a0b17b9dde42d</infohash>
<guid>https://academictorrents.com/details/6e329a869eb0e3e8c53e79b6039a0b17b9dde42d</guid>
<link>https://academictorrents.com/details/6e329a869eb0e3e8c53e79b6039a0b17b9dde42d</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>16251131788</size>
</item><item>
<title>2_1_2_2</title>
<category>Dataset</category>
<infohash>7b38b04f932deef8c9fdf8be723c6c624845c31f</infohash>
<guid>https://academictorrents.com/details/7b38b04f932deef8c9fdf8be723c6c624845c31f</guid>
<link>https://academictorrents.com/details/7b38b04f932deef8c9fdf8be723c6c624845c31f</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>84947883573</size>
</item><item>
<title>2_1_2_1</title>
<category>Dataset</category>
<infohash>b9b6c5d82fcc0852887e39793d1924b5c8b01a9e</infohash>
<guid>https://academictorrents.com/details/b9b6c5d82fcc0852887e39793d1924b5c8b01a9e</guid>
<link>https://academictorrents.com/details/b9b6c5d82fcc0852887e39793d1924b5c8b01a9e</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>21565836234</size>
</item><item>
<title>2_1_1_1</title>
<category>Dataset</category>
<infohash>bd7a9142c07d2eaf07897901718d873d5413278c</infohash>
<guid>https://academictorrents.com/details/bd7a9142c07d2eaf07897901718d873d5413278c</guid>
<link>https://academictorrents.com/details/bd7a9142c07d2eaf07897901718d873d5413278c</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>10325975340</size>
</item><item>
<title>1_2_7_1</title>
<category>Dataset</category>
<infohash>475f9cabb7bf030444f06ce9f25f8e7265e72b53</infohash>
<guid>https://academictorrents.com/details/475f9cabb7bf030444f06ce9f25f8e7265e72b53</guid>
<link>https://academictorrents.com/details/475f9cabb7bf030444f06ce9f25f8e7265e72b53</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>7163722961</size>
</item><item>
<title>1_2_6_1</title>
<category>Dataset</category>
<infohash>6359435c2014b1d7794ce6827d0c3dadea428646</infohash>
<guid>https://academictorrents.com/details/6359435c2014b1d7794ce6827d0c3dadea428646</guid>
<link>https://academictorrents.com/details/6359435c2014b1d7794ce6827d0c3dadea428646</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>12985448393</size>
</item><item>
<title>1_2_5_3</title>
<category>Dataset</category>
<infohash>2a5ad6fdcb3a6de20522c2e32706d2bcb7d51505</infohash>
<guid>https://academictorrents.com/details/2a5ad6fdcb3a6de20522c2e32706d2bcb7d51505</guid>
<link>https://academictorrents.com/details/2a5ad6fdcb3a6de20522c2e32706d2bcb7d51505</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>87950068343</size>
</item><item>
<title>1_2_5_2</title>
<category>Dataset</category>
<infohash>43ccef4451d26d2c87c50fe396e834f963220964</infohash>
<guid>https://academictorrents.com/details/43ccef4451d26d2c87c50fe396e834f963220964</guid>
<link>https://academictorrents.com/details/43ccef4451d26d2c87c50fe396e834f963220964</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>45595967510</size>
</item><item>
<title>1_2_5_1</title>
<category>Dataset</category>
<infohash>e4515771664ae37d487b89e5176f1c6d934caa0d</infohash>
<guid>https://academictorrents.com/details/e4515771664ae37d487b89e5176f1c6d934caa0d</guid>
<link>https://academictorrents.com/details/e4515771664ae37d487b89e5176f1c6d934caa0d</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>64470276981</size>
</item><item>
<title>1_2_4_3</title>
<category>Dataset</category>
<infohash>4e1cce191d2deafda63ba7f1eba7c678e8a2a554</infohash>
<guid>https://academictorrents.com/details/4e1cce191d2deafda63ba7f1eba7c678e8a2a554</guid>
<link>https://academictorrents.com/details/4e1cce191d2deafda63ba7f1eba7c678e8a2a554</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>86894441054</size>
</item><item>
<title>1_2_4_2</title>
<category>Dataset</category>
<infohash>9ed847f9636558f03ec612c663bbc46e462cf0b9</infohash>
<guid>https://academictorrents.com/details/9ed847f9636558f03ec612c663bbc46e462cf0b9</guid>
<link>https://academictorrents.com/details/9ed847f9636558f03ec612c663bbc46e462cf0b9</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>28298804773</size>
</item><item>
<title>1_2_4_1</title>
<category>Dataset</category>
<infohash>5571eb63e32216c0de1d5b268d543e7a92791a0d</infohash>
<guid>https://academictorrents.com/details/5571eb63e32216c0de1d5b268d543e7a92791a0d</guid>
<link>https://academictorrents.com/details/5571eb63e32216c0de1d5b268d543e7a92791a0d</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>64761651128</size>
</item><item>
<title>1_2_3_1</title>
<category>Dataset</category>
<infohash>65e26cea7c4b69f2b2bedd9bd5bb4b34ad908dcd</infohash>
<guid>https://academictorrents.com/details/65e26cea7c4b69f2b2bedd9bd5bb4b34ad908dcd</guid>
<link>https://academictorrents.com/details/65e26cea7c4b69f2b2bedd9bd5bb4b34ad908dcd</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>3564126168</size>
</item><item>
<title>1_2_2_1</title>
<category>Dataset</category>
<infohash>1ff4aa86fafa452e2045c6c451d96c6f86cc5d7e</infohash>
<guid>https://academictorrents.com/details/1ff4aa86fafa452e2045c6c451d96c6f86cc5d7e</guid>
<link>https://academictorrents.com/details/1ff4aa86fafa452e2045c6c451d96c6f86cc5d7e</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>6644144589</size>
</item><item>
<title>1_2_1_11</title>
<category>Dataset</category>
<infohash>70fa11b162517ce54ef5d6d4c64e07b5359ff3de</infohash>
<guid>https://academictorrents.com/details/70fa11b162517ce54ef5d6d4c64e07b5359ff3de</guid>
<link>https://academictorrents.com/details/70fa11b162517ce54ef5d6d4c64e07b5359ff3de</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>36333116718</size>
</item><item>
<title>1_2_1_10</title>
<category>Dataset</category>
<infohash>3bea9880f2d953500174c8fa13001fe509a8a1b6</infohash>
<guid>https://academictorrents.com/details/3bea9880f2d953500174c8fa13001fe509a8a1b6</guid>
<link>https://academictorrents.com/details/3bea9880f2d953500174c8fa13001fe509a8a1b6</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>24222168206</size>
</item><item>
<title>1_2_1_9</title>
<category>Dataset</category>
<infohash>76db1dc8908c07a24c620e18a38b3c37ac15ca0b</infohash>
<guid>https://academictorrents.com/details/76db1dc8908c07a24c620e18a38b3c37ac15ca0b</guid>
<link>https://academictorrents.com/details/76db1dc8908c07a24c620e18a38b3c37ac15ca0b</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>33689448848</size>
</item><item>
<title>1_2_1_8</title>
<category>Dataset</category>
<infohash>c5f53de1b283ef6f2617a7493031d1e385bf9718</infohash>
<guid>https://academictorrents.com/details/c5f53de1b283ef6f2617a7493031d1e385bf9718</guid>
<link>https://academictorrents.com/details/c5f53de1b283ef6f2617a7493031d1e385bf9718</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>22015142519</size>
</item><item>
<title>1_2_1_7</title>
<category>Dataset</category>
<infohash>0c56b19033a66304b242155968739ec23ca45e9c</infohash>
<guid>https://academictorrents.com/details/0c56b19033a66304b242155968739ec23ca45e9c</guid>
<link>https://academictorrents.com/details/0c56b19033a66304b242155968739ec23ca45e9c</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>28996527015</size>
</item><item>
<title>1_2_1_6</title>
<category>Dataset</category>
<infohash>43bc1b5df262eff774da60131bb8901431608d2a</infohash>
<guid>https://academictorrents.com/details/43bc1b5df262eff774da60131bb8901431608d2a</guid>
<link>https://academictorrents.com/details/43bc1b5df262eff774da60131bb8901431608d2a</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>17854027715</size>
</item><item>
<title>1_2_1_5</title>
<category>Dataset</category>
<infohash>6334b3693a60f8f84affe00c077c8ea427c91aaf</infohash>
<guid>https://academictorrents.com/details/6334b3693a60f8f84affe00c077c8ea427c91aaf</guid>
<link>https://academictorrents.com/details/6334b3693a60f8f84affe00c077c8ea427c91aaf</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>7990103818</size>
</item><item>
<title>1_2_1_4</title>
<category>Dataset</category>
<infohash>1b3c3b76e8e3348f8ee783d6d75d7264422d5633</infohash>
<guid>https://academictorrents.com/details/1b3c3b76e8e3348f8ee783d6d75d7264422d5633</guid>
<link>https://academictorrents.com/details/1b3c3b76e8e3348f8ee783d6d75d7264422d5633</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>6969547362</size>
</item><item>
<title>1_2_1_3</title>
<category>Dataset</category>
<infohash>39fc3f7ab5c7d8a5240c76c7dbde0d5f0d18a09f</infohash>
<guid>https://academictorrents.com/details/39fc3f7ab5c7d8a5240c76c7dbde0d5f0d18a09f</guid>
<link>https://academictorrents.com/details/39fc3f7ab5c7d8a5240c76c7dbde0d5f0d18a09f</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>38010471533</size>
</item><item>
<title>1_2_1_2</title>
<category>Dataset</category>
<infohash>0af65d07a29c97c7aeadcb31a47f6a1c92f77a4f</infohash>
<guid>https://academictorrents.com/details/0af65d07a29c97c7aeadcb31a47f6a1c92f77a4f</guid>
<link>https://academictorrents.com/details/0af65d07a29c97c7aeadcb31a47f6a1c92f77a4f</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>14334426981</size>
</item><item>
<title>1_2_1_1</title>
<category>Dataset</category>
<infohash>6995645a7cf45dfceba346db81a0e66cb78e5bed</infohash>
<guid>https://academictorrents.com/details/6995645a7cf45dfceba346db81a0e66cb78e5bed</guid>
<link>https://academictorrents.com/details/6995645a7cf45dfceba346db81a0e66cb78e5bed</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>13520366451</size>
</item><item>
<title>1_1_7_1</title>
<category>Dataset</category>
<infohash>2625e5172454f1bf85808cc7b8442ae45caf1b3a</infohash>
<guid>https://academictorrents.com/details/2625e5172454f1bf85808cc7b8442ae45caf1b3a</guid>
<link>https://academictorrents.com/details/2625e5172454f1bf85808cc7b8442ae45caf1b3a</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>31849918848</size>
</item><item>
<title>1_1_6_2</title>
<category>Dataset</category>
<infohash>f6b32ef35d4349ac074e1a8a438382beb38d15b0</infohash>
<guid>https://academictorrents.com/details/f6b32ef35d4349ac074e1a8a438382beb38d15b0</guid>
<link>https://academictorrents.com/details/f6b32ef35d4349ac074e1a8a438382beb38d15b0</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>48642591610</size>
</item><item>
<title>1_1_6_1</title>
<category>Dataset</category>
<infohash>d963006e92152d9d08612db29851446fae9e065d</infohash>
<guid>https://academictorrents.com/details/d963006e92152d9d08612db29851446fae9e065d</guid>
<link>https://academictorrents.com/details/d963006e92152d9d08612db29851446fae9e065d</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>17952112140</size>
</item><item>
<title>1_1_5_2</title>
<category>Dataset</category>
<infohash>8d78de35dcdf56b0025a1f0154b2239618eac2f7</infohash>
<guid>https://academictorrents.com/details/8d78de35dcdf56b0025a1f0154b2239618eac2f7</guid>
<link>https://academictorrents.com/details/8d78de35dcdf56b0025a1f0154b2239618eac2f7</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>3507339460</size>
</item><item>
<title>1_1_5_1</title>
<category>Dataset</category>
<infohash>48e240ef6c297b8b3cfc5c1af5582dbd3dc1c68a</infohash>
<guid>https://academictorrents.com/details/48e240ef6c297b8b3cfc5c1af5582dbd3dc1c68a</guid>
<link>https://academictorrents.com/details/48e240ef6c297b8b3cfc5c1af5582dbd3dc1c68a</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>15757617896</size>
</item><item>
<title>1_1_4_3</title>
<category>Dataset</category>
<infohash>0e88e2aacf1677f11ccf82cad81e0131868b403f</infohash>
<guid>https://academictorrents.com/details/0e88e2aacf1677f11ccf82cad81e0131868b403f</guid>
<link>https://academictorrents.com/details/0e88e2aacf1677f11ccf82cad81e0131868b403f</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>17212635074</size>
</item><item>
<title>1_1_4_2</title>
<category>Dataset</category>
<infohash>02f30676bcaee9824f54577c18759caa8a2040d0</infohash>
<guid>https://academictorrents.com/details/02f30676bcaee9824f54577c18759caa8a2040d0</guid>
<link>https://academictorrents.com/details/02f30676bcaee9824f54577c18759caa8a2040d0</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>15195444661</size>
</item><item>
<title>1_1_4_1</title>
<category>Dataset</category>
<infohash>45ec2438ecb78e1de6149b4bf9afc87efc7b2148</infohash>
<guid>https://academictorrents.com/details/45ec2438ecb78e1de6149b4bf9afc87efc7b2148</guid>
<link>https://academictorrents.com/details/45ec2438ecb78e1de6149b4bf9afc87efc7b2148</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>17947276755</size>
</item><item>
<title>1_1_3_1</title>
<category>Dataset</category>
<infohash>58d95be730f1f154e10e482b9ac991b930ead4f3</infohash>
<guid>https://academictorrents.com/details/58d95be730f1f154e10e482b9ac991b930ead4f3</guid>
<link>https://academictorrents.com/details/58d95be730f1f154e10e482b9ac991b930ead4f3</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>18280808161</size>
</item><item>
<title>1_1_2_6</title>
<category>Dataset</category>
<infohash>3b815005bae9f56192bfbc24bf3c79a3ad289d02</infohash>
<guid>https://academictorrents.com/details/3b815005bae9f56192bfbc24bf3c79a3ad289d02</guid>
<link>https://academictorrents.com/details/3b815005bae9f56192bfbc24bf3c79a3ad289d02</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>21786631311</size>
</item><item>
<title>1_1_2_5</title>
<category>Dataset</category>
<infohash>e154966a72f95faf23747e91a68b20d49a775acc</infohash>
<guid>https://academictorrents.com/details/e154966a72f95faf23747e91a68b20d49a775acc</guid>
<link>https://academictorrents.com/details/e154966a72f95faf23747e91a68b20d49a775acc</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>14773414040</size>
</item><item>
<title>1_1_2_4</title>
<category>Dataset</category>
<infohash>af96c679eee18f501fc6b51cf0d944f15f0727a6</infohash>
<guid>https://academictorrents.com/details/af96c679eee18f501fc6b51cf0d944f15f0727a6</guid>
<link>https://academictorrents.com/details/af96c679eee18f501fc6b51cf0d944f15f0727a6</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>11815072850</size>
</item><item>
<title>1_1_2_3</title>
<category>Dataset</category>
<infohash>c4b353e621b3874e75719850468620d6f661a1cb</infohash>
<guid>https://academictorrents.com/details/c4b353e621b3874e75719850468620d6f661a1cb</guid>
<link>https://academictorrents.com/details/c4b353e621b3874e75719850468620d6f661a1cb</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>13282462588</size>
</item><item>
<title>1_1_2_2</title>
<category>Dataset</category>
<infohash>7e4b5001aa8fdc371d9bac0ac9e3ff06addcbc9f</infohash>
<guid>https://academictorrents.com/details/7e4b5001aa8fdc371d9bac0ac9e3ff06addcbc9f</guid>
<link>https://academictorrents.com/details/7e4b5001aa8fdc371d9bac0ac9e3ff06addcbc9f</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>7812027893</size>
</item><item>
<title>1_1_2_1</title>
<category>Dataset</category>
<infohash>97c71c7998a60109d33e4603937a51bf6ea5354b</infohash>
<guid>https://academictorrents.com/details/97c71c7998a60109d33e4603937a51bf6ea5354b</guid>
<link>https://academictorrents.com/details/97c71c7998a60109d33e4603937a51bf6ea5354b</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>12165895387</size>
</item><item>
<title>1_1_1_2</title>
<category>Dataset</category>
<infohash>e05cd314b77270d87a4dcf99bdedfdda4b674641</infohash>
<guid>https://academictorrents.com/details/e05cd314b77270d87a4dcf99bdedfdda4b674641</guid>
<link>https://academictorrents.com/details/e05cd314b77270d87a4dcf99bdedfdda4b674641</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>17905391784</size>
</item><item>
<title>1_1_1_1</title>
<category>Dataset</category>
<infohash>87d23552467414c01b4291b4d120a9792bcc3233</infohash>
<guid>https://academictorrents.com/details/87d23552467414c01b4291b4d120a9792bcc3233</guid>
<link>https://academictorrents.com/details/87d23552467414c01b4291b4d120a9792bcc3233</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>9988949239</size>
</item><item>
<title>High-resolution gravity maps of the Earth, Moon and Mars</title>
<category>Dataset</category>
<infohash>9eca33223c8b37b7b0ec44355e1abfb812fd43f9</infohash>
<guid>https://academictorrents.com/details/9eca33223c8b37b7b0ec44355e1abfb812fd43f9</guid>
<link>https://academictorrents.com/details/9eca33223c8b37b7b0ec44355e1abfb812fd43f9</link>
<description>Provided is a collection of gravity maps of the Earth, Moon and Mars developed by Christian Hirt et al.  Most of the models rely on gravity-forward modelling of topographic masses, reaching resolutions as high as 3 arcsec (SRTM2gravity).  The maps can be used, for instance, to significantly improve the accuracy and resolution of global spherical harmonic gravity models such as EGM2008.  Some of the models (e.g., GGMplus) already contain the global long-wavelength information from spherical harmonic models and so can be used on their own.  For each model, manuscripts are attached with detailed information on the development and use of the models, as well as on their authors.</description>
<size>414894493125</size>
</item><item>
<title>TotalSegmentator CT Dataset</title>
<category>Dataset</category>
<infohash>337819f0e83a1c1ac1b7262385609dad5d485abf</infohash>
<guid>https://academictorrents.com/details/337819f0e83a1c1ac1b7262385609dad5d485abf</guid>
<link>https://academictorrents.com/details/337819f0e83a1c1ac1b7262385609dad5d485abf</link>
<description>https://i.imgur.com/u63xva0.png In 1204 CT images we segmented 104 anatomical structures (27 organs, 59 bones, 10 muscles, 8 vessels), covering the majority of relevant classes for most use cases. The CT images were randomly sampled from clinical routine, making this a real-world dataset that generalizes to clinical application. The dataset contains a wide range of pathologies, scanners, sequences and institutions.

Sample file listing for one case:
s0720/segmentations/portal_vein_and_splenic_vein.nii.gz	187.74kB
s0720/segmentations/pancreas.nii.gz	45.25kB
s0720/segmentations/lung_upper_lobe_right.nii.gz	218.92kB
s0720/segmentations/lung_upper_lobe_left.nii.gz	230.82kB
s0720/segmentations/lung_middle_lobe_right.nii.gz	201.18kB
s0720/segmentations/lung_lower_lobe_right.nii.gz	240.63kB
s0720/segmentations/lung_lower_lobe_left.nii.gz	239.49kB
s0720/segmentations/liver.nii.gz	273.08kB
s0720/segmentations/kidney_right.nii.gz	198.91kB
s0720/segmentations/kidney_left.nii.gz	197.82kB
s0720/segmentations/inferior_vena_cava.nii.gz	48.43kB
s0720/segmentations/iliopsoas_right.nii.gz	59.12kB
s0720/segmentations/iliopsoas_left.nii.gz	59.75kB
s0720/segmentations/iliac_vena_right.nii.gz	188.90kB
s0720/segmentations/iliac_vena_left.nii.gz	189.66kB
s0720/segmentations/iliac_artery_right.nii.gz	186.75kB
s0720/segmentations/iliac_artery_left.nii.gz	186.60kB
s0720/segmentations/humerus_right.nii.gz	41.96kB
s0720/segmentations/humerus_left.nii.gz	43.13kB
s0720/segmentations/hip_right.nii.gz	223.33kB
s0720/segmentations/hip_left.nii.gz	223.06kB
s0720/segmentations/heart_ventricle_right.nii.gz	48.07kB
s0720/segmentations/heart_ventricle_left.nii.gz	45.10kB
s0720/segmentations/heart_myocardium.nii.gz	49.17kB
s0720/segmentations/heart_atrium_right.nii.gz	44.41kB
s0720/segmentations/heart_atrium_left.nii.gz	43.02kB
s0720/segmentations/gluteus_minimus_right.nii.gz	46.65kB
s0720/segmentations/gluteus_minimus_left.nii.gz	45.95kB
s0720/segmentations/gluteus_medius_right.nii.gz	53.75kB
s0720/segmentations/gluteus_medius_left.nii.gz	52.68kB
s0720/segmentations/gluteus_maximus_right.nii.gz	58.02kB
s0720/segmentations/gluteus_maximus_left.nii.gz	56.20kB
s0720/segmentations/gallbladder.nii.gz	42.20kB
s0720/segmentations/femur_right.nii.gz	192.93kB
s0720/segmentations/femur_left.nii.gz	193.47kB
s0720/segmentations/face.nii.gz	183.15kB
s0720/segmentations/esophagus.nii.gz	188.93kB
s0720/segmentations/duodenum.nii.gz	189.53kB
s0720/segmentations/colon.nii.gz	239.38kB
s0720/segmentations/clavicula_right.nii.gz	42.92kB
s0720/segmentations/clavicula_left.nii.gz	42.50kB
s0720/segmentations/brain.nii.gz	183.15kB
s0720/segmentations/autochthon_right.nii.gz	62.97kB
s0720/segmentations/autochthon_left.nii.gz	63.75kB
s0720/segmentations/aorta.nii.gz	202.39kB
s0720/segmentations/adrenal_gland_right.nii.gz	184.50kB
s0720/segmentations/adrenal_gland_left.nii.gz	184.35kB
s0720/ct.nii.gz

https://arxiv.org/abs/2208.05868 https://zenodo.org/record/6802614</description>
<size>28404091806</size>
</item><item>
<title>CVPR_2022_papers</title>
<category>Paper</category>
<infohash>3f6bb994f7bdb2c2976c5208438114d08b586f12</infohash>
<guid>https://academictorrents.com/details/3f6bb994f7bdb2c2976c5208438114d08b586f12</guid>
<link>https://academictorrents.com/details/3f6bb994f7bdb2c2976c5208438114d08b586f12</link>
<description>Compilation of all papers from CVPR 2022, publicly available at https://openaccess.thecvf.com/CVPR2022. These CVPR 2022 papers are the Open Access versions, provided by the Computer Vision Foundation.</description>
<size>16607375959</size>
</item><item>
<title>VGG Human Pose Estimation datasets</title>
<category>Dataset</category>
<infohash>34f2197d360ac8453b33f50d09e452d504d30cbb</infohash>
<guid>https://academictorrents.com/details/34f2197d360ac8453b33f50d09e452d504d30cbb</guid>
<link>https://academictorrents.com/details/34f2197d360ac8453b33f50d09e452d504d30cbb</link>
<description>The VGG Human Pose Estimation datasets are a set of large video datasets annotated with human upper-body pose. The set comprises the "YouTube Pose", "BBC Pose", "Extended BBC Pose", "Short BBC Pose", and "ChaLearn Pose" datasets.</description>
<size>293598028318</size>
</item><item>
<title>Dancho Danchev's Ultimate "Cybercrime Research and Cybercrime Forum Data Set USB Compilation" 2022 - 265GB - [RAR] Compilation</title>
<category>Dataset</category>
<infohash>004cd44d17df7b12cf95df4266e0dbc5f86e08d7</infohash>
<guid>https://academictorrents.com/details/004cd44d17df7b12cf95df4266e0dbc5f86e08d7</guid>
<link>https://academictorrents.com/details/004cd44d17df7b12cf95df4266e0dbc5f86e08d7</link>
<description>Dancho Danchev is the world's leading expert in the field of cybercrime fighting and threat-intelligence gathering. He pioneered his own methodology for processing threat intelligence, leading to hundreds of high-quality analysis and research articles published at the industry's leading threat-intelligence blogs: ZDNet's Zero Day, Dancho Danchev's Mind Streams of Information Security Knowledge, and Webroot's Threat Blog. His research has been featured in Techmeme, ZDNet, CNN, PCWorld, SCMagazine, TheRegister, NYTimes, CNET, ComputerWorld, and H+Magazine, and presented at RSA Europe, CyberCamp, InfoSec, GCHQ, and Interpol. He continues to produce threat intelligence at his blog, publishing a diverse set of research analyses detailing the malicious and fraudulent activities of nation-state and malicious actors across the globe. This torrent is his official "Cybercrime Research USB Stick Compilation", representing all of his public research conducted and distributed from December 2005 up to 2021. It includes: full offline copies of his research in multiple e-book reader formats, including a full offline copy of one of the security industry's most popular publications, his personal blog, Dancho Danchev's Blog - Mind Streams of Information Security Knowledge; all of his publicly accessible research for ZDNet's Zero Day blog; all of his research for Webroot Inc.;
a copy of his old Twitter account, kept for reference and research purposes because it participated in a Top Secret U.K. GCHQ program called "Lovely Horse" that aimed to monitor hackers online; all of his cyber-warfare articles for Unit-123.org; extensive personally identifiable information on the bad guys, usable for cyber-attack attribution and research purposes, including his 2021 compilation, a 230-page OSINT analysis of some of the most prolific and most popular hacking groups and hacking teams internationally; and copies of two of his highly popular research studies on Iran's hacking ecosystem, including the associated Maltego SNA (Social Network Analysis) graphs. Sample directory listing: Dancho_Danchev_Astalavista_Security_Newsletter_Compilation_2021 Dancho_Danchev_Blog_E-Book_Archive_2021 Dancho_Danchev_Cyber_Threat_Actors_Analysis_Research_Compilation_2021 Dancho_Danchev_Cybercrime_Forum_Data_Set_2021 Dancho_Danchev_Cybercrime_Personal_Photos_Ecosystem_2021_Compilation Dancho_Danchev_Cybercrime_Research_2021_Personally_Identifiable_Information_Compilation Dancho_Danchev_Cybercrime_Research_Personal_Photos_Compilation_2021 Dancho_Danchev_Cybercrime_Research_Presentations_2021 Dancho_Danchev_Intelligence_Community_2.0_Dark_Web_Onion_Backup_2021 Dancho_Danchev_Interview_DW_Koobface_Botnet_MP3_2021 Dancho_Danchev_Iran_Hackers_Personally_Identifiable_Information_Compilation_2021 Dancho_Danchev_Iran_White_Paper_2021 Dancho_Danchev_Iran_White_Paper_Part_Two_2021 Dancho_Danchev_Keynote_Koobface_Botnet_CyberCamp_2021 Dancho_Danchev_Malware_Trends_White_Paper_2021 Dancho_Danchev_Medium_Research_Compilation_2021 Dancho_Danchev_Personal_Memoir_Compilation_Research_2021 Dancho_Danchev_Personal_Photos_Compilation_2021 Dancho_Danchev_Private_Party_New_Year_Videos_Compilation Dancho_Danchev_Security_Policy_White_Paper_2021 
Dancho_Danchev_Twitter_Account_Archive_2021 Dancho_Danchev_Unit-123_Security_Research_Compilation_2021 Dancho_Danchev_Webroot_Research_Compilation_2021 Dancho_Danchev_WhoisXML_API_Research_Articles_2021 Dancho_Danchev_ZDNet_Research_Compilation_2021 Dancho Danchev can be reached at https://ddanchev.blogspot.com or at dancho.danchev@hush.com Enjoy!</description>
<size>271700867443</size>
</item><item>
<title>TV Human Interactions Dataset</title>
<category>Dataset</category>
<infohash>0a121ed3969547bbaa1c3df83baaffbae0a717ae</infohash>
<guid>https://academictorrents.com/details/0a121ed3969547bbaa1c3df83baaffbae0a717ae</guid>
<link>https://academictorrents.com/details/0a121ed3969547bbaa1c3df83baaffbae0a717ae</link>
<description>The TV Human Interactions Dataset consists of 300 video clips collected from over 20 different TV shows, containing four interactions: hand shakes, high fives, hugs, and kisses, as well as clips that don't contain any of these interactions. Every frame of every video is annotated with the upper body of each person (a bounding box), the discrete head orientation (profile-left, profile-right, frontal-left, frontal-right, and backwards), and each person's interaction label.</description>
<size>163856178</size>
</item><item>
<title>Human Pose Evaluator Dataset</title>
<category>Dataset</category>
<infohash>dfabc03d37027d7305468e3da9336bf1c8467cf0</infohash>
<guid>https://academictorrents.com/details/dfabc03d37027d7305468e3da9336bf1c8467cf0</guid>
<link>https://academictorrents.com/details/dfabc03d37027d7305468e3da9336bf1c8467cf0</link>
<description>This dataset is derived from Hollywood movies and the TV series "Buffy the Vampire Slayer". The images are randomly sampled from these videos. All humans with more than three parts visible are annotated with upper-body stickmen (6 parts: head, torso, upper and lower arms). Annotations from Hollywood movies: a total of about 6,000 frames are sampled from ten Hollywood movies, namely "About a Boy", "Apollo 13", "Four Weddings and a Funeral", "Forrest Gump", "Notting Hill", "Witness", "Gandhi", "Love Actually", "The Graduate", and "Groundhog Day", yielding 11,639 ground-truth annotations. These annotations offer a large variety of poses, people, and backgrounds. Annotations from "Buffy the Vampire Slayer": a total of 499 frames are sampled from episodes 2 and 3 of season 5 of this TV show, containing a total of 755 annotations. This subset is mainly introduced to extend the Buffy stickmen dataset.</description>
<size>649314189</size>
</item><item>
<title>McGue, Matt - Behavioral Genetics (MP3 only - full course) - University of Minnesota</title>
<category>Course</category>
<infohash>9195aac598b8bb2f88a5f134b10485d1eac3baaf</infohash>
<guid>https://academictorrents.com/details/9195aac598b8bb2f88a5f134b10485d1eac3baaf</guid>
<link>https://academictorrents.com/details/9195aac598b8bb2f88a5f134b10485d1eac3baaf</link>
<description>https://sites.google.com/umn.edu/behavioralgenetics-mcgue/home-topics</description>
<size>2166667079</size>
</item><item>
<title>Behavioral Genetics (Video only - full course) - Matt McGue - University of Minnesota</title>
<category>Course</category>
<infohash>b0e836a49d690b93104fe93baa954fd0406693fe</infohash>
<guid>https://academictorrents.com/details/b0e836a49d690b93104fe93baa954fd0406693fe</guid>
<link>https://academictorrents.com/details/b0e836a49d690b93104fe93baa954fd0406693fe</link>
<description>https://sites.google.com/umn.edu/behavioralgenetics-mcgue/home-topics</description>
<size>23513370277</size>
</item><item>
<title>gaia_dr3_mvl_v1</title>
<category>Dataset</category>
<infohash>300605cbbcf8fe47851dd68902b40e825f915bcb</infohash>
<guid>https://academictorrents.com/details/300605cbbcf8fe47851dd68902b40e825f915bcb</guid>
<link>https://academictorrents.com/details/300605cbbcf8fe47851dd68902b40e825f915bcb</link>
<description>Gaia DR3 data in MVL format. MVL stands for Mappable Vector Library, a file format designed for memory mapping. With a solid-state drive, you can map the entire Gaia dataset into memory and access it at will, even on a small notebook. You can also run parallel computations, because the data is shared between processes running on the same computer. You can find more information and examples at: https://www.atlas.aei.uni-hannover.de/work/volodya/Gaia_dr3/</description>
<size>1469709229888</size>
</item><item>
<title>Behavioral Genetics (full course) - Matt McGue - University of Minnesota</title>
<category>Course</category>
<infohash>e7f9293a40633742f90455d7430fc292a2e8ab2a</infohash>
<guid>https://academictorrents.com/details/e7f9293a40633742f90455d7430fc292a2e8ab2a</guid>
<link>https://academictorrents.com/details/e7f9293a40633742f90455d7430fc292a2e8ab2a</link>
<description>Behavioral Genetics - University of Minnesota Course Material PSY 5137 - Fall 2020 Full introductory course into Behavior Genetics</description>
<size>23748550333</size>
</item><item>
<title>Feescope-Paper-Data</title>
<category>Dataset</category>
<infohash>aa2a3d2f5ff0b3974871db47db57bc3dabf3c192</infohash>
<guid>https://academictorrents.com/details/aa2a3d2f5ff0b3974871db47db57bc3dabf3c192</guid>
<link>https://academictorrents.com/details/aa2a3d2f5ff0b3974871db47db57bc3dabf3c192</link>
<description>We present a novel fluorescence microscope light path that enables imaging, during free behavior, of thousands of neurons in mice and hundreds of neurons in juvenile songbirds. The light path eliminates traditional illumination optics, allowing for head-mounted microscopes that have both a lower weight and a larger field-of-view (FOV) than previously possible. Using this light path, we designed two microscopes: one optimized for field-of-view (∼4 mm FOV; 1.4 g), and the other optimized for weight (1.0 mm FOV; 1.0 g).</description>
<size>54476823281</size>
</item><item>
<title>Hand Dataset</title>
<category>Dataset</category>
<infohash>ddb78dcbe9985b51a397697a6d874b9dbc46300f</infohash>
<guid>https://academictorrents.com/details/ddb78dcbe9985b51a397697a6d874b9dbc46300f</guid>
<link>https://academictorrents.com/details/ddb78dcbe9985b51a397697a6d874b9dbc46300f</link>
<description>We introduce a comprehensive dataset of hand images collected from various public image datasets, as listed in Table 1. A total of 13,050 hand instances are annotated. Hand instances larger than a fixed bounding-box area (1,500 sq. pixels) are considered "big" enough for detection and are used for evaluation, giving around 4,170 high-quality hand instances. While collecting the data, no restriction was imposed on the pose or visibility of people, nor was any constraint imposed on the environment. In each image, all hands that can be perceived clearly by humans are annotated. Each annotation consists of a bounding rectangle oriented with respect to the wrist, which does not have to be axis-aligned.</description>
<size>250460299</size>
</item><item>
<title>Large-scale CelebFaces Attributes (CelebA) Dataset</title>
<category>Dataset</category>
<infohash>7979c4735621d84c86b1097ad87b5c14f22968a4</infohash>
<guid>https://academictorrents.com/details/7979c4735621d84c86b1097ad87b5c14f22968a4</guid>
<link>https://academictorrents.com/details/7979c4735621d84c86b1097ad87b5c14f22968a4</link>
<description>CelebFaces Attributes Dataset (CelebA) is a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations. The images in this dataset cover large pose variations and background clutter. CelebA has large diversities, large quantities, and rich annotations, including: - 10,177 identities, - 202,599 face images, and - 5 landmark locations and 40 binary attribute annotations per image. The dataset can be employed as the training and test sets for the following computer vision tasks: face attribute recognition, face recognition, face detection, landmark (or facial part) localization, and face editing &amp; synthesis. https://mmlab.ie.cuhk.edu.hk/projects/CelebA/overview.png # Agreement The CelebA dataset is available for non-commercial research purposes only. All images of the CelebA dataset were obtained from the Internet and are not the property of MMLAB, The Chinese University of Hong Kong. The MMLAB is not responsible for the content nor the meaning of these images. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes any portion of the images or any portion of derived data. You agree not to further copy, publish or distribute any portion of the CelebA dataset. Copies may, however, be made for internal use at a single site within the same organization. The MMLAB reserves the right to terminate your access to the CelebA dataset at any time. The face identities are released upon request for research purposes only. Please contact us for details. # Citation</description>
<size>18171600008</size>
</item><item>
<title>CelebAMask-HQ</title>
<category>Dataset</category>
<infohash>b5738811260a33d02bfb781f7251b15bee7bb987</infohash>
<guid>https://academictorrents.com/details/b5738811260a33d02bfb781f7251b15bee7bb987</guid>
<link>https://academictorrents.com/details/b5738811260a33d02bfb781f7251b15bee7bb987</link>
<description>CelebAMask-HQ is a large-scale face image dataset of 30,000 high-resolution face images selected from the CelebA dataset by following CelebA-HQ. Each image has a segmentation mask of facial attributes corresponding to CelebA. The masks of CelebAMask-HQ were manually annotated at a size of 512 x 512 with 19 classes covering all facial components and accessories: skin, nose, eyes, eyebrows, ears, mouth, lip, hair, hat, eyeglass, earring, necklace, neck, and cloth. CelebAMask-HQ can be used to train and evaluate algorithms for face parsing, face recognition, and GANs for face generation and editing. ## Sample Images https://raw.githubusercontent.com/switchablenorms/CelebAMask-HQ/master/images/sample.png ## Face Manipulation Model with CelebAMask-HQ CelebAMask-HQ can be used in several research fields, including facial image manipulation, face parsing, face recognition, and face hallucination. We showcase an application on interactive facial image manipulation below: * Samples of interactive facial image manipulation ## Related Works * **CelebA** dataset:&lt;br/&gt; Ziwei Liu, Ping Luo, Xiaogang Wang and Xiaoou Tang, "Deep Learning Face Attributes in the Wild", in IEEE International Conference on Computer Vision (ICCV), 2015 * **CelebA-HQ** was collected from CelebA and further post-processed by the following paper:&lt;br/&gt; Karras et al., "Progressive Growing of GANs for Improved Quality, Stability, and Variation", in International Conference on Learning Representations (ICLR), 2018 ## Dataset Agreement * The CelebAMask-HQ dataset is available for **non-commercial research purposes** only. * You agree **not to** reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes any portion of the images or any portion of derived data. * You agree **not to** further copy, publish or distribute any portion of the CelebAMask-HQ dataset. 
Copies may, however, be made for internal use at a single site within the same organization. ## Related Projects using CelebAMask-HQ * [SPADE-TensorFlow](https://github.com/taki0112/SPADE-Tensorflow) * [FaceParsing-PyTorch](https://github.com/zllrunning/face-parsing.PyTorch) ## License and Citation The use of this software is RESTRICTED to **non-commercial research and educational purposes**.</description>
<size>3153930546</size>
</item><item>
<title>Glint360K face recognition dataset</title>
<category>Dataset</category>
<infohash>e5f46ee502b9e76da8cc3a0e4f7c17e4000c7b1e</infohash>
<guid>https://academictorrents.com/details/e5f46ee502b9e76da8cc3a0e4f7c17e4000c7b1e</guid>
<link>https://academictorrents.com/details/e5f46ee502b9e76da8cc3a0e4f7c17e4000c7b1e</link>
<description>Glint360K contains **17,091,657** images of **360,232** individuals. By employing the Partial FC training strategy, baseline models trained on Glint360K can easily achieve state-of-the-art performance. Detailed evaluation results on the large-scale test sets (e.g. IFRT, IJB-C and MegaFace) are as follows: # 1. Evaluation on IFRT **r** denotes the sampling rate of negative class centers. | Backbone | Dataset | African | Caucasian | Indian | Asian | ALL | | --- | --- | --- | --- | --- | --- | --- | | R50 | MS1M-V3 | 76.24 | 86.21 | 84.44 | 37.43 | 71.02 | | R124 | MS1M-V3 | 81.08 | 89.06 | 87.53 | 38.40 | 74.76 |</description>
<size>128583192913</size>
</item><item>
<title>INbreast: toward a full-field digital mammographic database</title>
<category>Dataset</category>
<infohash>ce1ecade37814701ac95193a910a3c6917ea43b3</infohash>
<guid>https://academictorrents.com/details/ce1ecade37814701ac95193a910a3c6917ea43b3</guid>
<link>https://academictorrents.com/details/ce1ecade37814701ac95193a910a3c6917ea43b3</link>
<description>Rationale and objectives: Computer-aided detection and diagnosis (CAD) systems have been developed over the past two decades to assist radiologists in the detection and diagnosis of lesions seen on breast imaging exams, thus providing a second opinion. Mammographic databases play an important role in the development of algorithms aimed at the detection and diagnosis of mammary lesions. However, available databases often do not satisfy all the requirements needed for research and study purposes. This article presents and details a new mammographic database. Materials and methods: Images were acquired at a breast center located in a university hospital (Centro Hospitalar de S. João [CHSJ], Breast Centre, Porto) with the permission of the Portuguese National Committee of Data Protection and the hospital's ethics committee. A Siemens MammoNovation full-field digital mammography system with an amorphous selenium solid-state detector was used. Results: The new database, INbreast, has a total of 115 cases (410 images), of which 90 cases are from women with both breasts affected (four images per case) and 25 cases are from mastectomy patients (two images per case). Several types of lesions (masses, calcifications, asymmetries, and distortions) were included. Accurate contours made by specialists are also provided in XML format. Conclusion: The strength of the presented database, INbreast, lies in the fact that it was built with full-field digital mammograms (as opposed to digitized mammograms), presents a wide variability of cases, and is made publicly available together with precise annotations. We believe that this database can be a reference for future works centered on or related to breast cancer imaging. https://i.imgur.com/3bWtH38.png</description>
<size>2063601019</size>
</item><item>
<title>The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions</title>
<category>Dataset</category>
<infohash>dc3188ee1ce7e2d2254113111b406c484101ba65</infohash>
<guid>https://academictorrents.com/details/dc3188ee1ce7e2d2254113111b406c484101ba65</guid>
<link>https://academictorrents.com/details/dc3188ee1ce7e2d2254113111b406c484101ba65</link>
<description>Training of neural networks for automated diagnosis of pigmented skin lesions is hampered by the small size and lack of diversity of available datasets of dermatoscopic images. We tackle this problem by releasing the HAM10000 ("Human Against Machine with 10000 training images") dataset. We collected dermatoscopic images from different populations, acquired and stored by different modalities. The final dataset consists of 10015 dermatoscopic images which can serve as a training set for academic machine learning purposes. Cases include a representative collection of all important diagnostic categories in the realm of pigmented lesions: actinic keratoses and intraepithelial carcinoma / Bowen's disease (akiec), basal cell carcinoma (bcc), benign keratosis-like lesions (solar lentigines / seborrheic keratoses and lichen-planus-like keratoses, bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi (nv), and vascular lesions (angiomas, angiokeratomas, pyogenic granulomas and hemorrhage, vasc). More than 50% of lesions are confirmed through histopathology (histo); the ground truth for the rest of the cases is either follow-up examination (follow_up), expert consensus (consensus), or confirmation by in-vivo confocal microscopy (confocal). The dataset includes lesions with multiple images, which can be tracked by the lesion_id column within the HAM10000_metadata file. Due to upload size limitations, images are stored in two files: - HAM10000_images_part1.zip (5000 JPEG files) - HAM10000_images_part2.zip (5015 JPEG files) # Additional data for evaluation purposes The HAM10000 dataset served as the training set for the ISIC 2018 challenge (Task 3). The test-set images are available herein as ISIC2018_Task3_Test_Images.zip (1511 images); the official validation set is available through the challenge website https://challenge2018.isic-archive.com/. 
The ISIC-Archive also provides a "Live challenge" submission site for continuous evaluation of automated classifiers on the official validation and test sets. # Comparison to physicians Test-set evaluations of the ISIC 2018 challenge were compared to physicians on an international scale, where the majority of challenge participants outperformed expert readers: Tschandl P. et al., Lancet Oncol 2019 # Human-computer collaboration The test-set images were also used in a study comparing different methods and scenarios of human-computer collaboration: Tschandl P. et al., Nature Medicine 2020 The following corresponding metadata is available herein: - ISIC2018_Task3_Test_NatureMedicine_AI_Interaction_Benefit.csv: human ratings for test images with and without interaction with a ResNet34 CNN (malignancy probability, multi-class probability, CBIR) or human-crowd multi-class probabilities. This data was collected for and analyzed in Tschandl P. et al., Nature Medicine 2020; please refer to this publication when using the data. - HAM10000_segmentations_lesion_tschandl.zip: to evaluate regions of CNN activations in Tschandl P. et al., Nature Medicine 2020 (please refer to this publication when using the data), a single dermatologist (Tschandl P) created binary segmentation masks for all 10015 images from the HAM10000 dataset. Masks were initialized with the segmentation network described by Tschandl et al., Computers in Biology and Medicine 2019, and subsequently verified, corrected, or replaced via the free-hand selection tool in FIJI. # Related Publication Tschandl, P., Rosendahl, C. &amp; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 5, 180161 (2018). doi: 10.1038/sdata.2018.161</description>
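Lesions that appear in multiple images can be grouped through the lesion_id column described above. A minimal stdlib-only sketch (the lesion_id and image_id column names come from the dataset description; the metadata file name is an assumption):

```python
import csv
import io
from collections import defaultdict

def images_per_lesion(rows):
    """Group image_id values by lesion_id (rows: iterable of dict-like records)."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["lesion_id"]].append(row["image_id"])
    return dict(groups)

def multi_image_lesions(metadata_text):
    """Return only the lesions that appear in more than one image."""
    rows = csv.DictReader(io.StringIO(metadata_text))
    return {k: v for k, v in images_per_lesion(rows).items() if len(v) > 1}

# Usage with the real file (path is an assumption):
# with open("HAM10000_metadata") as fh:
#     dupes = multi_image_lesions(fh.read())
```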
<size>3203576126</size>
</item><item>
<title>The Oxford-IIIT Pet Dataset</title>
<category>Dataset</category>
<infohash>b18bbd9ba03d50b0f7f479acc9f4228a408cecc1</infohash>
<guid>https://academictorrents.com/details/b18bbd9ba03d50b0f7f479acc9f4228a408cecc1</guid>
<link>https://academictorrents.com/details/b18bbd9ba03d50b0f7f479acc9f4228a408cecc1</link>
<description>We have created a 37-category pet dataset with roughly 200 images for each class. The images have large variations in scale, pose, and lighting. All images have an associated ground-truth annotation of breed, head ROI, and pixel-level trimap segmentation.</description>
<size>811092049</size>
</item><item>
<title>Reddit comments/submissions 2005-06 to 2022-06</title>
<category>Dataset</category>
<infohash>0e1813622b3f31570cfe9a6ad3ee8dabffdb8eb6</infohash>
<guid>https://academictorrents.com/details/0e1813622b3f31570cfe9a6ad3ee8dabffdb8eb6</guid>
<link>https://academictorrents.com/details/0e1813622b3f31570cfe9a6ad3ee8dabffdb8eb6</link>
<description>Reddit comments and submissions from 2005-06 to 2022-06, collected by Pushshift; the source dumps can be found at https://files.pushshift.io/reddit/ These are zstandard-compressed ndjson files. Example Python scripts for parsing the data can be found at https://github.com/Watchful1/PushshiftDumps</description>
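A minimal sketch of streaming one of these zstandard-compressed ndjson dumps in Python (assumes the third-party `zstandard` package; the enlarged `max_window_size` reflects how these long-window dumps are commonly decompressed and is an assumption, as is the example file name):

```python
import io
import json

def iter_ndjson(lines):
    """Yield one dict per non-empty ndjson line."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

def read_dump(path):
    """Stream records from a .zst ndjson dump without loading it into memory."""
    import zstandard  # pip install zstandard
    with open(path, "rb") as fh:
        dctx = zstandard.ZstdDecompressor(max_window_size=2**31)
        reader = io.TextIOWrapper(dctx.stream_reader(fh), encoding="utf-8")
        yield from iter_ndjson(reader)

# Usage (file name is a hypothetical example):
# for record in read_dump("RC_2019-04.zst"):
#     print(record["subreddit"])
```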
<size>1759433926547</size>
</item><item>
<title>1dsfm</title>
<category>Dataset</category>
<infohash>b50252f761321b1e12f5f16b135c1c9605643ebb</infohash>
<guid>https://academictorrents.com/details/b50252f761321b1e12f5f16b135c1c9605643ebb</guid>
<link>https://academictorrents.com/details/b50252f761321b1e12f5f16b135c1c9605643ebb</link>
<description>We present a simple, effective method for solving structure from motion problems by averaging epipolar geometries. Based on recent successes in solving for global camera rotations using averaging schemes, we focus on the problem of solving for 3D camera translations given a network of noisy pairwise camera translation directions (or 3D point observations). To do this well, we have two main insights. First, we propose a method for removing outliers from problem instances by solving simpler low-dimensional subproblems, which we refer to as 1DSfM problems. Second, we present a simple, principled averaging scheme. We demonstrate this new method in the wild on Internet photo collections. Dataset scraped 23 February 2019 Code and papers scraped 15 July 2022</description>
<size>36328360720</size>
</item><item>
<title>Testing the cybersecurity awareness measures of the employees in large companies by practical exposure to an attack</title>
<category>Paper</category>
<infohash>ac21b33538e65341857e191ebca04126f9ec788a</infohash>
<guid>https://academictorrents.com/details/ac21b33538e65341857e191ebca04126f9ec788a</guid>
<link>https://academictorrents.com/details/ac21b33538e65341857e191ebca04126f9ec788a</link>
<description/>
<size>198461</size>
</item><item>
<title>LDM 100k Dataset</title>
<category>Dataset</category>
<infohash>63aeb864bbe2115ded0aa0d7d36334c026f0660b</infohash>
<guid>https://academictorrents.com/details/63aeb864bbe2115ded0aa0d7d36334c026f0660b</guid>
<link>https://academictorrents.com/details/63aeb864bbe2115ded0aa0d7d36334c026f0660b</link>
<description>AI-generated high-resolution brain MRI imaging data comprising 100k subjects, with associated information such as age, sex, and brain size normalised by head size (a surrogate of atrophy). The data was generated using a 3D Latent Diffusion Model trained on the Cambridge-1 supercomputer. This work was supported by the Wellcome Trust, EPSRC, and NVIDIA.</description>
<size>591187107840</size>
</item><item>
<title>Pubmed Baseline 2021-12-12</title>
<category>Dataset</category>
<infohash>cc12294ab7bf1f730738a6bff89052ad8156d8d8</infohash>
<guid>https://academictorrents.com/details/cc12294ab7bf1f730738a6bff89052ad8156d8d8</guid>
<link>https://academictorrents.com/details/cc12294ab7bf1f730738a6bff89052ad8156d8d8</link>
<description>Just the baseline files, no update files. md5 sums included and checked before upload. From https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/README.txt: The PubMed Baseline Repository and Daily Update files. Last Updated December 13, 2022. All questions should be directed to: National Center for Biotechnology Information, info@ncbi.nlm.nih.gov. This document describes the PubMed Database available on the NCBI FTP site under the ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline and ftp://ftp.ncbi.nlm.nih.gov/pubmed/updatefiles directories. PubMed comprises more than 31 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full-text content from PubMed Central and publisher web sites. Please use a valid email address as your password when you download these data so that we can contact you regarding changes and updates. For the latest information and updates, please subscribe to our listserv at https://www.ncbi.nlm.nih.gov/mailman/listinfo/utilities-announce Please note that the Baseline and Daily Update folders include the citation data (XML) as well as the corresponding .md5 files. Please use the .md5 file to verify the integrity of the XML. Record counts are also included in an HTML file. We do our best to ensure that the counts in the HTML file match the counts in the XML files; however, the .md5 files should be used to check the validity of the XML. The HTML counts are not intended for this purpose. 
Baseline Data: NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. The complete baseline consists of files pubmed22n0001 through pubmed22n1114. ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline Daily Update Files: Each day, NLM produces update files that include new, revised and deleted citations. The first Update file to be loaded after loading the complete set of 2022 MEDLINE/PubMed Baseline files is pubmed22n1115.xml. ftp://ftp.ncbi.nlm.nih.gov/pubmed/updatefiles PubMed DTD: http://dtd.nlm.nih.gov/ncbi/pubmed/out/pubmed_190101.dtd Documentation: Alphabetical list of elements and their attributes https://dtd.nlm.nih.gov/ncbi/pubmed/doc/out/190101/index.html</description>
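The README's advice to verify each XML file against its .md5 sidecar can be sketched as follows (Python, stdlib only; the "MD5(name)= hex" sidecar line format is an assumption based on NCBI's usual convention — adjust the parsing if your copies differ):

```python
import hashlib

def md5_of_file(path, chunk=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def parse_md5_line(line):
    """Extract the hex digest from a line like 'MD5(pubmed22n0001.xml.gz)= abc...'."""
    return line.rsplit("=", 1)[-1].strip()

# Usage (file names are examples from the README):
# expected = parse_md5_line(open("pubmed22n0001.xml.gz.md5").read())
# assert md5_of_file("pubmed22n0001.xml.gz") == expected
```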
<size>37381448042</size>
</item><item>
<title>Yue bi</title>
<category>Dataset</category>
<infohash>7cdb7952ee3961af302a418a61f0c9edb226223f</infohash>
<guid>https://academictorrents.com/details/7cdb7952ee3961af302a418a61f0c9edb226223f</guid>
<link>https://academictorrents.com/details/7cdb7952ee3961af302a418a61f0c9edb226223f</link>
<description/>
<size>147227570</size>
</item><item>
<title>April 2022 Public Data File from Crossref</title>
<category>Dataset</category>
<infohash>4dcfdf804775f2d92b7a030305fa0350ebef6f3e</infohash>
<guid>https://academictorrents.com/details/4dcfdf804775f2d92b7a030305fa0350ebef6f3e</guid>
<link>https://academictorrents.com/details/4dcfdf804775f2d92b7a030305fa0350ebef6f3e</link>
<description>Note that this Crossref metadata is always openly available. The difference here is that we’ve done the time-saving work of putting all of the records registered through April 2022 into one file for download. To keep this metadata current, you can access new records via our public API at: https://api.crossref.org And, if you do use our API, we encourage you to read the section of the documentation on "etiquette". That is, how to use the API without making it impossible for others to use.</description>
<size>167993381070</size>
</item><item>
<title>MLDS-DS3-10000-v1.0</title>
<category>Dataset</category>
<infohash>b2bbaccd349e8e2954a438ced6fc01adae4ea1f1</infohash>
<guid>https://academictorrents.com/details/b2bbaccd349e8e2954a438ced6fc01adae4ea1f1</guid>
<link>https://academictorrents.com/details/b2bbaccd349e8e2954a438ced6fc01adae4ea1f1</link>
<description>Machine Learning Dataset, DS3-10000 v1.0: A dataset for parameter-space analysis of neural networks. See https://www.mlcathome.org/ for more information</description>
<size>1354135939103</size>
</item><item>
<title>MLDS-DS3-5000-v1.0</title>
<category>Dataset</category>
<infohash>c143f1b108fe5ab748d5a6f1ff7b2a6271e4219d</infohash>
<guid>https://academictorrents.com/details/c143f1b108fe5ab748d5a6f1ff7b2a6271e4219d</guid>
<link>https://academictorrents.com/details/c143f1b108fe5ab748d5a6f1ff7b2a6271e4219d</link>
<description>Machine Learning Dataset, DS3-5000 v1.0: A dataset for parameter-space analysis of neural networks. See https://www.mlcathome.org/ for more information</description>
<size>677223180049</size>
</item><item>
<title>MLDS-DS3-1000-v1.0</title>
<category>Dataset</category>
<infohash>647e5460e91e2e105a5085595cd07a51c762f632</infohash>
<guid>https://academictorrents.com/details/647e5460e91e2e105a5085595cd07a51c762f632</guid>
<link>https://academictorrents.com/details/647e5460e91e2e105a5085595cd07a51c762f632</link>
<description>Machine Learning Dataset, DS3-1000 v1.0: A dataset for parameter-space analysis of neural networks. See https://www.mlcathome.org/ for more information</description>
<size>135692914178</size>
</item><item>
<title>MLDS-DS3-500-v1.0</title>
<category>Dataset</category>
<infohash>7e4ddf00bd76ed30eb8bd2bb25c73a61113144a8</infohash>
<guid>https://academictorrents.com/details/7e4ddf00bd76ed30eb8bd2bb25c73a61113144a8</guid>
<link>https://academictorrents.com/details/7e4ddf00bd76ed30eb8bd2bb25c73a61113144a8</link>
<description>Machine Learning Dataset, DS3-500 v1.0: A dataset for parameter-space analysis of neural networks. See https://www.mlcathome.org/ for more information</description>
<size>68002264542</size>
</item><item>
<title>MLDS-DS3-100</title>
<category>Dataset</category>
<infohash>79886df76c240d4936805d23229bf8160531be05</infohash>
<guid>https://academictorrents.com/details/79886df76c240d4936805d23229bf8160531be05</guid>
<link>https://academictorrents.com/details/79886df76c240d4936805d23229bf8160531be05</link>
<description>Machine Learning Dataset, DS3-100 v1.0: A dataset for parameter-space analysis of neural networks.  See https://www.mlcathome.org/ for more information</description>
<size>13849063949</size>
</item><item>
<title>Mars Rover Environmental Monitoring Station Data (2012-2022)</title>
<category>Dataset</category>
<infohash>13fe7880330170e8b9a2712846994bd3252c8384</infohash>
<guid>https://academictorrents.com/details/13fe7880330170e8b9a2712846994bd3252c8384</guid>
<link>https://academictorrents.com/details/13fe7880330170e8b9a2712846994bd3252c8384</link>
<description>"The Rover Environmental Monitoring Station (REMS) is a weather station on Mars aboard the Curiosity rover, contributed by Spain and Finland. REMS measures humidity, pressure, temperature, wind speed, and ultraviolet radiation on Mars. This Spanish project is led by the Spanish Astrobiology Center and includes the Finnish Meteorological Institute as a partner, contributing the pressure and humidity sensors." Dataset mirrored to Academic Torrents to avoid the Kaggle login wall.</description>
<size>462887</size>
</item><item>
<title>LIMUSE: LIGHTWEIGHT MULTI-MODAL SPEAKER EXTRACTION</title>
<category>Dataset</category>
<infohash>3cd18ff2d3eec881207dcc5ca5a2c3a2a3afe462</infohash>
<guid>https://academictorrents.com/details/3cd18ff2d3eec881207dcc5ca5a2c3a2a3afe462</guid>
<link>https://academictorrents.com/details/3cd18ff2d3eec881207dcc5ca5a2c3a2a3afe462</link>
<description/>
<size>12867952426</size>
</item><item>
<title>MC_GRID</title>
<category>Dataset</category>
<infohash>7f06f8280a3b496f2af0f78131ced619df14a0c3</infohash>
<guid>https://academictorrents.com/details/7f06f8280a3b496f2af0f78131ced619df14a0c3</guid>
<link>https://academictorrents.com/details/7f06f8280a3b496f2af0f78131ced619df14a0c3</link>
<description>Here we release the dataset (Multi_Channel_Grid, abbreviated as MC_Grid) used in our paper LIMUSE: LIGHTWEIGHT MULTI-MODAL SPEAKER EXTRACTION. MC_Grid, which is based on the GRID dataset, includes multi-channel audio, extracted voiceprints, and visual features. Our code is available at https://github.com/aispeech-lab/LiMuSE. Feel free to contact us if you have any questions or suggestions.</description>
<size>18448099391</size>
</item><item>
<title>qvtou1</title>
<category>Dataset</category>
<infohash>d8b6fad07dd7c3812b72a2724810595f60bd36a3</infohash>
<guid>https://academictorrents.com/details/d8b6fad07dd7c3812b72a2724810595f60bd36a3</guid>
<link>https://academictorrents.com/details/d8b6fad07dd7c3812b72a2724810595f60bd36a3</link>
<description>qvtou1</description>
<size>119580</size>
</item><item>
<title>qvtou</title>
<category>Dataset</category>
<infohash>056eab1f29052f4637a8c460f0d6c94c9359856e</infohash>
<guid>https://academictorrents.com/details/056eab1f29052f4637a8c460f0d6c94c9359856e</guid>
<link>https://academictorrents.com/details/056eab1f29052f4637a8c460f0d6c94c9359856e</link>
<description>qvtou</description>
<size>119655</size>
</item><item>
<title>rs_builds.7z</title>
<category>Dataset</category>
<infohash>14c078d20f7f4500943dad2141b936650a6bf528</infohash>
<guid>https://academictorrents.com/details/14c078d20f7f4500943dad2141b936650a6bf528</guid>
<link>https://academictorrents.com/details/14c078d20f7f4500943dad2141b936650a6bf528</link>
<description>Building segmentation in satellite imagery. Consists of a combination of several building segmentation datasets (see https://aistudio.baidu.com/aistudio/datasetdetail/102929 for details). Contains ~400K labelled 512x512 patches for building segmentation.</description>
<size>14506971412</size>
</item><item>
<title>Building segmentation in remote sensing imagery - an aggregate of many datasets</title>
<category>Dataset</category>
<infohash>54d938f3b3d71404ebf32e5d89a9cb059f20aaa3</infohash>
<guid>https://academictorrents.com/details/54d938f3b3d71404ebf32e5d89a9cb059f20aaa3</guid>
<link>https://academictorrents.com/details/54d938f3b3d71404ebf32e5d89a9cb059f20aaa3</link>
<description>Building segmentation in remote sensing imagery: an aggregate of many datasets. Please see: https://aistudio.baidu.com/aistudio/datasetdetail/102929</description>
<size>14949701472</size>
</item><item>
<title>gutenberg-prosody</title>
<category>Dataset</category>
<infohash>7cc66019eccc5ca574727ef90f5030ce7dba8380</infohash>
<guid>https://academictorrents.com/details/7cc66019eccc5ca574727ef90f5030ce7dba8380</guid>
<link>https://academictorrents.com/details/7cc66019eccc5ca574727ef90f5030ce7dba8380</link>
<description>textsWithFilepaths.parquet is a table of all Gutenberg texts and their corresponding file names. Each row contains the filename and the full text. You may prefer this format to the gutenberg Python/R package, especially if you want to study large sets of texts. Use gutenberg-metadata.json to look up the corresponding title, author, publication year, etc.</description>
<size>11174123041</size>
</item><item>
<title>Crowd Activity Dataset</title>
<category>Dataset</category>
<infohash>49d65f0d512c6d4a23aeafe20cd41f20cf1267be</infohash>
<guid>https://academictorrents.com/details/49d65f0d512c6d4a23aeafe20cd41f20cf1267be</guid>
<link>https://academictorrents.com/details/49d65f0d512c6d4a23aeafe20cd41f20cf1267be</link>
<description/>
<size>3874806272</size>
</item><item>
<title>FABDEM</title>
<category>Dataset</category>
<infohash>a3ce54e3de8177011a3c6e1b9ed130e0467c3f4e</infohash>
<guid>https://academictorrents.com/details/a3ce54e3de8177011a3c6e1b9ed130e0467c3f4e</guid>
<link>https://academictorrents.com/details/a3ce54e3de8177011a3c6e1b9ed130e0467c3f4e</link>
<description>FABDEM (Forest And Buildings removed Copernicus DEM) is a global elevation map that removes building and tree height biases from the Copernicus GLO 30 Digital Elevation Model (DEM). The data is available at 1 arc second grid spacing (approximately 30m at the equator) for the globe. The FABDEM dataset is licensed under a Creative Commons "CC BY-NC-SA 4.0" license. For commercial use queries, please contact fabdem@fathom.global This dataset is published in support of the paper "A 30 m global map of elevation with forests and buildings removed" published by IOP in Environmental Research Letters at https://dx.doi.org/10.1088/1748-9326/ac4d4f.</description>
<size>496392754974</size>
</item><item>
<title>JSEP2021-models</title>
<category>Paper</category>
<infohash>6716b2ce1375f077a7810879c1ac65dc8673cc07</infohash>
<guid>https://academictorrents.com/details/6716b2ce1375f077a7810879c1ac65dc8673cc07</guid>
<link>https://academictorrents.com/details/6716b2ce1375f077a7810879c1ac65dc8673cc07</link>
<description/>
<size>72962015759</size>
</item><item>
<title>UC Berkeley CS61C Great Ideas In Computer Architecture</title>
<category>Course</category>
<infohash>7f53b1ae54fe80b6c98b4e263e59f5b08061000c</infohash>
<guid>https://academictorrents.com/details/7f53b1ae54fe80b6c98b4e263e59f5b08061000c</guid>
<link>https://academictorrents.com/details/7f53b1ae54fe80b6c98b4e263e59f5b08061000c</link>
<description>The subjects covered in this course include: C and assembly language programming, translation of high-level programs into machine language, computer organization, caches, performance measurement, parallelism, CPU design, warehouse-scale computing, and related topics.</description>
<size>748852727</size>
</item><item>
<title>rapppid_dataset</title>
<category>Dataset</category>
<infohash>34079b029c6a8230f196593164e3fab8956e9ee5</infohash>
<guid>https://academictorrents.com/details/34079b029c6a8230f196593164e3fab8956e9ee5</guid>
<link>https://academictorrents.com/details/34079b029c6a8230f196593164e3fab8956e9ee5</link>
<description>Motivation: Computational methods for the prediction of protein-protein interactions, while important tools for researchers, are plagued by challenges in generalising to unseen proteins. Datasets used for modelling protein-protein predictions are particularly predisposed to information leakage and sampling biases.
Results: In this study, we introduce RAPPPID, a method for the Regularised Automatic Prediction of Protein-Protein Interactions using Deep Learning. RAPPPID is a twin AWD-LSTM network which employs multiple regularisation methods during training time to learn generalised weights. Testing on stringent interaction datasets composed of proteins not seen during training, RAPPPID outperforms state-of-the-art methods. Further experiments show that RAPPPID's performance holds regardless of the particular proteins in the testing set, and its performance is higher for biologically supported edges. This study serves to demonstrate that appropriate regularisation is an important component of overcoming the challenges of creating models for protein-protein interaction prediction that generalise to unseen proteins.
Availability and Implementation: Code and datasets are freely available at https://github.com/jszym/rapppid.
Contact: amin.emad@mcgill.ca
Supplementary Information: Online-only supplementary data is available at the journal's website.
Competing Interest Statement: The authors have declared no competing interest.</description>
<size>62697815</size>
</item><item>
<title>wikidata-20220103-all.json.gz</title>
<category>Dataset</category>
<infohash>229cfeb2331ad43d4706efd435f6d78f40a3c438</infohash>
<guid>https://academictorrents.com/details/229cfeb2331ad43d4706efd435f6d78f40a3c438</guid>
<link>https://academictorrents.com/details/229cfeb2331ad43d4706efd435f6d78f40a3c438</link>
<description>Dump of Wikidata of January 3rd 2022.</description>
<size>109042925619</size>
</item><item>
<title>Open License Music 2008-2013 (Jamendo)</title>
<category>Dataset</category>
<infohash>3dc75c809c528b32b77d3081af3542892c85529b</infohash>
<guid>https://academictorrents.com/details/3dc75c809c528b32b77d3081af3542892c85529b</guid>
<link>https://academictorrents.com/details/3dc75c809c528b32b77d3081af3542892c85529b</link>
<description>Music with an open license can be useful for making audiobooks and videos, and can also be used for scientific purposes. Tracks under open licenses (Creative Commons BY, CC BY-SA, and so on) that permit modification of the work (derivative works) and commercial use have been selected. It is, however, necessary to provide attribution: the author's name and the title of the musical work. These songs were downloaded from the Jamendo website in 2008-2013. The site stated that these sound files carry various open licenses; each folder in this collection contains a text file indicating the license. Audio bitrate: 128-320 kbps. Duration: 147 hours. There are 1477 directories and 7317 files. These music files were used in some files of Spoken Wikipedia 2018 (https://academictorrents.com/details/5d2a7304089a97cecb5de3f055495ed65013c968).</description>
<size>12939554230</size>
</item><item>
<title>7_interrogazione_base_dati</title>
<category>Course</category>
<infohash>dd0fb84cbe5451ae2bc76aa8a6cfa14c8e57296d</infohash>
<guid>https://academictorrents.com/details/dd0fb84cbe5451ae2bc76aa8a6cfa14c8e57296d</guid>
<link>https://academictorrents.com/details/dd0fb84cbe5451ae2bc76aa8a6cfa14c8e57296d</link>
<description>Slides and video recordings of a lecture, in Italian, from a lab held at the University of Milan: Interrogazione di una base di dati (Querying a database). C75-870: Nozioni d'informatica per gli storici, Dipartimento di studi storici, Università degli studi di Milano. Martin Ruskov, martin.ruskov@unimi.it</description>
<size>568703615</size>
</item><item>
<title>LSA64-WOODS</title>
<category>Dataset</category>
<infohash>704bf5981eb337cae7cb518c3abb9d7b6bdf3e49</infohash>
<guid>https://academictorrents.com/details/704bf5981eb337cae7cb518c3abb9d7b6bdf3e49</guid>
<link>https://academictorrents.com/details/704bf5981eb337cae7cb518c3abb9d7b6bdf3e49</link>
<description/>
<size>271398527</size>
</item><item>
<title>PCL-WOODS</title>
<category>Dataset</category>
<infohash>e8b0a24177988f9c3f8c3c63a8212546f67a25a3</infohash>
<guid>https://academictorrents.com/details/e8b0a24177988f9c3f8c3c63a8212546f67a25a3</guid>
<link>https://academictorrents.com/details/e8b0a24177988f9c3f8c3c63a8212546f67a25a3</link>
<description/>
<size>3024167345</size>
</item><item>
<title>SEDFx-WOODS</title>
<category>Dataset</category>
<infohash>58ea303dce39ffe822bec7704f9eb65e4173defd</infohash>
<guid>https://academictorrents.com/details/58ea303dce39ffe822bec7704f9eb65e4173defd</guid>
<link>https://academictorrents.com/details/58ea303dce39ffe822bec7704f9eb65e4173defd</link>
<description/>
<size>10682829874</size>
</item><item>
<title>CAP-WOODS</title>
<category>Dataset</category>
<infohash>500d0c473108ef72e01b0f8037251b09331467f9</infohash>
<guid>https://academictorrents.com/details/500d0c473108ef72e01b0f8037251b09331467f9</guid>
<link>https://academictorrents.com/details/500d0c473108ef72e01b0f8037251b09331467f9</link>
<description/>
<size>8658578746</size>
</item><item>
<title>HHAR-WOODS</title>
<category>Dataset</category>
<infohash>f48f38de06b3cd560fb90307b5a1997a12bcc29c</infohash>
<guid>https://academictorrents.com/details/f48f38de06b3cd560fb90307b5a1997a12bcc29c</guid>
<link>https://academictorrents.com/details/f48f38de06b3cd560fb90307b5a1997a12bcc29c</link>
<description/>
<size>151730350</size>
</item><item>
<title>6_creazione_base_dati</title>
<category>Course</category>
<infohash>aa8bb2e63d89fc077275a90113acc011b8af48a7</infohash>
<guid>https://academictorrents.com/details/aa8bb2e63d89fc077275a90113acc011b8af48a7</guid>
<link>https://academictorrents.com/details/aa8bb2e63d89fc077275a90113acc011b8af48a7</link>
<description>Slides and video recordings of a lecture, in Italian, from a lab held at the University of Milan: Creazione di una base di dati (Creating a database). C75-870: Nozioni d'informatica per gli storici, Dipartimento di studi storici, Università degli studi di Milano. Martin Ruskov, martin.ruskov@unimi.it</description>
<size>494332634</size>
</item><item>
<title>5_progettazione_logica</title>
<category>Course</category>
<infohash>67521dab1525a4f20deefe0c5622152fdfe37e4d</infohash>
<guid>https://academictorrents.com/details/67521dab1525a4f20deefe0c5622152fdfe37e4d</guid>
<link>https://academictorrents.com/details/67521dab1525a4f20deefe0c5622152fdfe37e4d</link>
<description>Slides and video recordings of a lecture, in Italian, from a lab held at the University of Milan: La progettazione logica (Logical design). C75-870: Nozioni d'informatica per gli storici, Dipartimento di studi storici, Università degli studi di Milano. Martin Ruskov, martin.ruskov@unimi.it</description>
<size>426062688</size>
</item><item>
<title>4_modello_relazionale</title>
<category>Course</category>
<infohash>6d7bf1bbedaf5ae8d1825d0969e54106d67e6885</infohash>
<guid>https://academictorrents.com/details/6d7bf1bbedaf5ae8d1825d0969e54106d67e6885</guid>
<link>https://academictorrents.com/details/6d7bf1bbedaf5ae8d1825d0969e54106d67e6885</link>
<description>Slides and video recordings of a lecture, in Italian, from a lab held at the University of Milan: Il modello relazionale (The relational model). C75-870: Nozioni d'informatica per gli storici, Dipartimento di studi storici, Università degli studi di Milano. Martin Ruskov, martin.ruskov@unimi.it</description>
<size>398136791</size>
</item><item>
<title>3_progettazione_concettuale</title>
<category>Course</category>
<infohash>86367a0d2ab41dfe09c1e6b595f7d2ec96371c70</infohash>
<guid>https://academictorrents.com/details/86367a0d2ab41dfe09c1e6b595f7d2ec96371c70</guid>
<link>https://academictorrents.com/details/86367a0d2ab41dfe09c1e6b595f7d2ec96371c70</link>
<description>Slides and video recordings of a lecture, in Italian, from a lab held at the University of Milan: La progettazione concettuale (Conceptual design). C75-870: Nozioni d'informatica per gli storici, Dipartimento di studi storici, Università degli studi di Milano. Martin Ruskov, martin.ruskov@unimi.it</description>
<size>461670847</size>
</item><item>
<title>2_semantica_dei_dati</title>
<category>Course</category>
<infohash>070fa65de0e8634a555aba388b43f545a792125f</infohash>
<guid>https://academictorrents.com/details/070fa65de0e8634a555aba388b43f545a792125f</guid>
<link>https://academictorrents.com/details/070fa65de0e8634a555aba388b43f545a792125f</link>
<description>Slides and video recordings of a lecture, in Italian, from a lab held at the University of Milan: Semantica dei dati: Ontologie e rappresentazione della conoscenza (Data semantics: ontologies and knowledge representation). C75-870: Nozioni d'informatica per gli storici, Dipartimento di studi storici, Università degli studi di Milano. Martin Ruskov, martin.ruskov@unimi.it</description>
<size>433245148</size>
</item><item>
<title>1_dati_informazioni_sistemi_informativi</title>
<category>Course</category>
<infohash>d9ffba517be4c1153322b530ed1e45e51c207611</infohash>
<guid>https://academictorrents.com/details/d9ffba517be4c1153322b530ed1e45e51c207611</guid>
<link>https://academictorrents.com/details/d9ffba517be4c1153322b530ed1e45e51c207611</link>
<description>Slides and video recordings of a lecture, in Italian, from a lab held at the University of Milan: Dati, Informazioni, Sistemi Informativi (Data, information, information systems). C75-870: Nozioni d'informatica per gli storici, Dipartimento di studi storici, Università degli studi di Milano. Martin Ruskov, martin.ruskov@unimi.it</description>
<size>432041975</size>
</item><item>
<title>0_cose_linformatica</title>
<category>Course</category>
<infohash>642bad0fee3d261a6a5a521ebfb4318bd65d3e03</infohash>
<guid>https://academictorrents.com/details/642bad0fee3d261a6a5a521ebfb4318bd65d3e03</guid>
<link>https://academictorrents.com/details/642bad0fee3d261a6a5a521ebfb4318bd65d3e03</link>
<description>Slides and video recordings of a lecture, in Italian, from a lab held at the University of Milan: Cos'è l'informatica? Come può servire uno storico? (What is computer science? How can it serve a historian?). C75-870: Nozioni d'informatica per gli storici, Dipartimento di studi storici, Università degli studi di Milano. Martin Ruskov, martin.ruskov@unimi.it</description>
<size>538939856</size>
</item><item>
<title>Synthetic Data for Text Localisation in Natural Images</title>
<category>Dataset</category>
<infohash>2dba9518166cbd141534cbf381aa3e99a087e83c</infohash>
<guid>https://academictorrents.com/details/2dba9518166cbd141534cbf381aa3e99a087e83c</guid>
<link>https://academictorrents.com/details/2dba9518166cbd141534cbf381aa3e99a087e83c</link>
<description>This is a synthetically generated dataset, in which word instances are placed in natural scene images, while taking into account the scene layout. The dataset consists of *800 thousand* images with approximately *8 million* synthetic word instances. Each text instance is annotated with its text-string, word-level and character-level bounding-boxes.</description>
<size>73499997703</size>
</item><item>
<title>Reading Text in the Wild with Convolutional Neural Networks</title>
<category>Dataset</category>
<infohash>3d0b4f09080703d2a9c6be50715b46389fdb3af1</infohash>
<guid>https://academictorrents.com/details/3d0b4f09080703d2a9c6be50715b46389fdb3af1</guid>
<link>https://academictorrents.com/details/3d0b4f09080703d2a9c6be50715b46389fdb3af1</link>
<description>The exact data used to train our deep convolutional neural networks (see our [research page](http://www.robots.ox.ac.uk/~vgg/research/text/)) is included in this torrent. This is a synthetically generated dataset which we found sufficient for training text recognition on real-world images. ![Synthetic Data Engine process](https://i.imgur.com/cqmgbUa.png) This dataset consists of *9 million images* covering *90k English words*, and includes the training, validation and test splits used in our work.</description>
<size>10678411583</size>
</item><item>
<title>zipfile</title>
<category>Dataset</category>
<infohash>260159f804a5995d8da7320e5629fb12ed29c2de</infohash>
<guid>https://academictorrents.com/details/260159f804a5995d8da7320e5629fb12ed29c2de</guid>
<link>https://academictorrents.com/details/260159f804a5995d8da7320e5629fb12ed29c2de</link>
<description>This is a dataset collected from the YouTube channel Movieclips.</description>
<size>17367605029</size>
</item><item>
<title>GeneralIndex.ngrams.f</title>
<category>Paper</category>
<infohash>3b6446e0cffa2137bd7ff555b307469fbb74eae9</infohash>
<guid>https://academictorrents.com/details/3b6446e0cffa2137bd7ff555b307469fbb74eae9</guid>
<link>https://academictorrents.com/details/3b6446e0cffa2137bd7ff555b307469fbb74eae9</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.f Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.f/GeneralIndex.ngrams.f_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.f_meta.xml contains metadata about this torrent's contents.</description>
<size>281978863616</size>
</item><item>
<title>GeneralIndex.ngrams.e</title>
<category>Paper</category>
<infohash>bd84d4a28059254b36271f8dd2748ca1dcd57daa</infohash>
<guid>https://academictorrents.com/details/bd84d4a28059254b36271f8dd2748ca1dcd57daa</guid>
<link>https://academictorrents.com/details/bd84d4a28059254b36271f8dd2748ca1dcd57daa</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.e Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.e/GeneralIndex.ngrams.e_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.e_meta.xml contains metadata about this torrent's contents.</description>
<size>282012418048</size>
</item><item>
<title>GeneralIndex.ngrams.d</title>
<category>Paper</category>
<infohash>6ae42d41eec43f2742e5dd6dd8411fc4c75584ba</infohash>
<guid>https://academictorrents.com/details/6ae42d41eec43f2742e5dd6dd8411fc4c75584ba</guid>
<link>https://academictorrents.com/details/6ae42d41eec43f2742e5dd6dd8411fc4c75584ba</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.d Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.d/GeneralIndex.ngrams.d_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.d_meta.xml contains metadata about this torrent's contents.</description>
<size>282335379456</size>
</item><item>
<title>GeneralIndex.ngrams.b</title>
<category>Paper</category>
<infohash>72919dcfa99557476367a176fcc60c3e7d06811c</infohash>
<guid>https://academictorrents.com/details/72919dcfa99557476367a176fcc60c3e7d06811c</guid>
<link>https://academictorrents.com/details/72919dcfa99557476367a176fcc60c3e7d06811c</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.b Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.b/GeneralIndex.ngrams.b_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.b_meta.xml contains metadata about this torrent's contents.</description>
<size>292208771072</size>
</item><item>
<title>GeneralIndex.ngrams.a</title>
<category>Paper</category>
<infohash>aa54b8fd21c8bb7cbe3787b6244334f1816b23ab</infohash>
<guid>https://academictorrents.com/details/aa54b8fd21c8bb7cbe3787b6244334f1816b23ab</guid>
<link>https://academictorrents.com/details/aa54b8fd21c8bb7cbe3787b6244334f1816b23ab</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.a Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.a/GeneralIndex.ngrams.a_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.a_meta.xml contains metadata about this torrent's contents.</description>
<size>300601573376</size>
</item><item>
<title>GeneralIndex.ngrams.9</title>
<category>Paper</category>
<infohash>79c378fe13187ec137571a64b8a20fe991751880</infohash>
<guid>https://academictorrents.com/details/79c378fe13187ec137571a64b8a20fe991751880</guid>
<link>https://academictorrents.com/details/79c378fe13187ec137571a64b8a20fe991751880</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.9 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.9/GeneralIndex.ngrams.9_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.9_meta.xml contains metadata about this torrent's contents.</description>
<size>302979743744</size>
</item><item>
<title>GeneralIndex.ngrams.8</title>
<category>Paper</category>
<infohash>7ab56ac0c1109e0487b4b9c1d22d2030ea28ca87</infohash>
<guid>https://academictorrents.com/details/7ab56ac0c1109e0487b4b9c1d22d2030ea28ca87</guid>
<link>https://academictorrents.com/details/7ab56ac0c1109e0487b4b9c1d22d2030ea28ca87</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.8 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.8/GeneralIndex.ngrams.8_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.8_meta.xml contains metadata about this torrent's contents.</description>
<size>309359280128</size>
</item><item>
<title>GeneralIndex.ngrams.7</title>
<category>Paper</category>
<infohash>7c98f655b1c8a2c1b8bcd2b7981c66680a1b4ef0</infohash>
<guid>https://academictorrents.com/details/7c98f655b1c8a2c1b8bcd2b7981c66680a1b4ef0</guid>
<link>https://academictorrents.com/details/7c98f655b1c8a2c1b8bcd2b7981c66680a1b4ef0</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.7 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.7/GeneralIndex.ngrams.7_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.7_meta.xml contains metadata about this torrent's contents.</description>
<size>304716185600</size>
</item><item>
<title>GeneralIndex.ngrams.6</title>
<category>Paper</category>
<infohash>e6caa05b1cbbd2fd7f6ae848e82049c8c972f1ff</infohash>
<guid>https://academictorrents.com/details/e6caa05b1cbbd2fd7f6ae848e82049c8c972f1ff</guid>
<link>https://academictorrents.com/details/e6caa05b1cbbd2fd7f6ae848e82049c8c972f1ff</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.6 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.6/GeneralIndex.ngrams.6_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.6_meta.xml contains metadata about this torrent's contents.</description>
<size>303902490624</size>
</item><item>
<title>GeneralIndex.ngrams.5</title>
<category>Paper</category>
<infohash>8b776dd89c92bd86722ca1335941a2081030be27</infohash>
<guid>https://academictorrents.com/details/8b776dd89c92bd86722ca1335941a2081030be27</guid>
<link>https://academictorrents.com/details/8b776dd89c92bd86722ca1335941a2081030be27</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.5 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.5/GeneralIndex.ngrams.5_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.5_meta.xml contains metadata about this torrent's contents.</description>
<size>305009786880</size>
</item><item>
<title>GeneralIndex.ngrams.4</title>
<category>Paper</category>
<infohash>338d3c506ab7dee919a8049b42cd3b43488e23d6</infohash>
<guid>https://academictorrents.com/details/338d3c506ab7dee919a8049b42cd3b43488e23d6</guid>
<link>https://academictorrents.com/details/338d3c506ab7dee919a8049b42cd3b43488e23d6</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.4 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.4/GeneralIndex.ngrams.4_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.4_meta.xml contains metadata about this torrent's contents.</description>
<size>306649759744</size>
</item><item>
<title>GeneralIndex.ngrams.3</title>
<category>Paper</category>
<infohash>1b4bf0c6021401a2319f84ac5ee5d4704b083c47</infohash>
<guid>https://academictorrents.com/details/1b4bf0c6021401a2319f84ac5ee5d4704b083c47</guid>
<link>https://academictorrents.com/details/1b4bf0c6021401a2319f84ac5ee5d4704b083c47</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.3 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.3/GeneralIndex.ngrams.3_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.3_meta.xml contains metadata about this torrent's contents.</description>
<size>306276466688</size>
</item><item>
<title>GeneralIndex.ngrams.2</title>
<category>Paper</category>
<infohash>406839f439863bd8d1efa2c3333592f45ba4114e</infohash>
<guid>https://academictorrents.com/details/406839f439863bd8d1efa2c3333592f45ba4114e</guid>
<link>https://academictorrents.com/details/406839f439863bd8d1efa2c3333592f45ba4114e</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.2 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.2/GeneralIndex.ngrams.2_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.2_meta.xml contains metadata about this torrent's contents.</description>
<size>309162147840</size>
</item><item>
<title>GeneralIndex.ngrams.1</title>
<category>Paper</category>
<infohash>6b2bddc4d7ae6e069a3205e798dfb5fec9109865</infohash>
<guid>https://academictorrents.com/details/6b2bddc4d7ae6e069a3205e798dfb5fec9109865</guid>
<link>https://academictorrents.com/details/6b2bddc4d7ae6e069a3205e798dfb5fec9109865</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.1 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.1/GeneralIndex.ngrams.1_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.1_meta.xml contains metadata about this torrent's contents.</description>
<size>304506470400</size>
</item><item>
<title>GeneralIndex.ngrams.0</title>
<category>Paper</category>
<infohash>26047a785d32bae8718b9ea47ce3ccc1407b76a2</infohash>
<guid>https://academictorrents.com/details/26047a785d32bae8718b9ea47ce3ccc1407b76a2</guid>
<link>https://academictorrents.com/details/26047a785d32bae8718b9ea47ce3ccc1407b76a2</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.0 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.0/GeneralIndex.ngrams.0_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.0_meta.xml contains metadata about this torrent's contents.</description>
<size>307568312320</size>
</item><item>
<title>GeneralIndex.ngrams.c</title>
<category>Paper</category>
<infohash>bc0a5ad1da4439467f63a3ea831acb4acbac2152</infohash>
<guid>https://academictorrents.com/details/bc0a5ad1da4439467f63a3ea831acb4acbac2152</guid>
<link>https://academictorrents.com/details/bc0a5ad1da4439467f63a3ea831acb4acbac2152</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.c Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.ngrams.c/GeneralIndex.ngrams.c_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.ngrams.c_meta.xml contains metadata about this torrent's contents.</description>
<size>284453502976</size>
</item><item>
<title>GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model</title>
<category>Dataset</category>
<infohash>feb5891fd364f357b03a9ebbf3b7d83a0aabe1ec</infohash>
<guid>https://academictorrents.com/details/feb5891fd364f357b03a9ebbf3b7d83a0aabe1ec</guid>
<link>https://academictorrents.com/details/feb5891fd364f357b03a9ebbf3b7d83a0aabe1ec</link>
<description/>
<size>9424601088</size>
</item><item>
<title>GeneralIndex.ngrams.0</title>
<category>Paper</category>
<infohash>54d8bec36cc4c993c1880ad751fb29dbd3eaa454</infohash>
<guid>https://academictorrents.com/details/54d8bec36cc4c993c1880ad751fb29dbd3eaa454</guid>
<link>https://academictorrents.com/details/54d8bec36cc4c993c1880ad751fb29dbd3eaa454</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.0</description>
<size>307547170715</size>
</item><item>
<title>GeneralIndex.ngrams.9</title>
<category>Paper</category>
<infohash>310f0e0be7a776f353f1ad5eca836d83c501bf52</infohash>
<guid>https://academictorrents.com/details/310f0e0be7a776f353f1ad5eca836d83c501bf52</guid>
<link>https://academictorrents.com/details/310f0e0be7a776f353f1ad5eca836d83c501bf52</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.9</description>
<size>302958408450</size>
</item><item>
<title>GeneralIndex.ngrams.4</title>
<category>Dataset</category>
<infohash>4a5d3d16fb656d727404040905687318f352c47b</infohash>
<guid>https://academictorrents.com/details/4a5d3d16fb656d727404040905687318f352c47b</guid>
<link>https://academictorrents.com/details/4a5d3d16fb656d727404040905687318f352c47b</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.4</description>
<size>306627899488</size>
</item><item>
<title>GeneralIndex.ngrams.8</title>
<category>Dataset</category>
<infohash>4d235eefa8296cd1c4cb0834927bc286545cb77c</infohash>
<guid>https://academictorrents.com/details/4d235eefa8296cd1c4cb0834927bc286545cb77c</guid>
<link>https://academictorrents.com/details/4d235eefa8296cd1c4cb0834927bc286545cb77c</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.8</description>
<size>309334836587</size>
</item><item>
<title>GeneralIndex.ngrams.f</title>
<category>Paper</category>
<infohash>12cb82e1f88b5b762af382c33a8e029fdc314f78</infohash>
<guid>https://academictorrents.com/details/12cb82e1f88b5b762af382c33a8e029fdc314f78</guid>
<link>https://academictorrents.com/details/12cb82e1f88b5b762af382c33a8e029fdc314f78</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.f</description>
<size>281955768718</size>
</item><item>
<title>GeneralIndex.ngrams.e</title>
<category>Paper</category>
<infohash>2cd89712ae6e40333c1a69c933168b75601cf367</infohash>
<guid>https://academictorrents.com/details/2cd89712ae6e40333c1a69c933168b75601cf367</guid>
<link>https://academictorrents.com/details/2cd89712ae6e40333c1a69c933168b75601cf367</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.e</description>
<size>281987750789</size>
</item><item>
<title>GeneralIndex.ngrams.7</title>
<category>Paper</category>
<infohash>d021d69e67f1f546770a884d211cc0aa0247d5a3</infohash>
<guid>https://academictorrents.com/details/d021d69e67f1f546770a884d211cc0aa0247d5a3</guid>
<link>https://academictorrents.com/details/d021d69e67f1f546770a884d211cc0aa0247d5a3</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.7</description>
<size>304695184301</size>
</item><item>
<title>GeneralIndex.ngrams.d</title>
<category>Paper</category>
<infohash>6a46fea834f0773d0e50b17f78037cf9a4ab02ea</infohash>
<guid>https://academictorrents.com/details/6a46fea834f0773d0e50b17f78037cf9a4ab02ea</guid>
<link>https://academictorrents.com/details/6a46fea834f0773d0e50b17f78037cf9a4ab02ea</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.d</description>
<size>282311760491</size>
</item><item>
<title>GeneralIndex.ngrams.b</title>
<category>Paper</category>
<infohash>315a054d741577e6ff2b1a5c6e0d416949945fb9</infohash>
<guid>https://academictorrents.com/details/315a054d741577e6ff2b1a5c6e0d416949945fb9</guid>
<link>https://academictorrents.com/details/315a054d741577e6ff2b1a5c6e0d416949945fb9</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.b</description>
<size>292186875957</size>
</item><item>
<title>GeneralIndex.ngrams.6</title>
<category>Paper</category>
<infohash>225e81c64b8029e8d497f62004c5fd34fe8fddf8</infohash>
<guid>https://academictorrents.com/details/225e81c64b8029e8d497f62004c5fd34fe8fddf8</guid>
<link>https://academictorrents.com/details/225e81c64b8029e8d497f62004c5fd34fe8fddf8</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.6</description>
<size>303879612379</size>
</item><item>
<title>GeneralIndex.ngrams.5</title>
<category>Paper</category>
<infohash>ce2bb88be9d5c74fd76c4aef16121a073c4108e0</infohash>
<guid>https://academictorrents.com/details/ce2bb88be9d5c74fd76c4aef16121a073c4108e0</guid>
<link>https://academictorrents.com/details/ce2bb88be9d5c74fd76c4aef16121a073c4108e0</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.5</description>
<size>304985273232</size>
</item><item>
<title>GeneralIndex.ngrams.a</title>
<category>Dataset</category>
<infohash>d77c823286242b1e5f9df203dbc31b29ce14fdd5</infohash>
<guid>https://academictorrents.com/details/d77c823286242b1e5f9df203dbc31b29ce14fdd5</guid>
<link>https://academictorrents.com/details/d77c823286242b1e5f9df203dbc31b29ce14fdd5</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.a</description>
<size>300577671125</size>
</item><item>
<title>GeneralIndex.ngrams.1</title>
<category>Dataset</category>
<infohash>975e291a08f913a5bcf9ab67a58743288decd618</infohash>
<guid>https://academictorrents.com/details/975e291a08f913a5bcf9ab67a58743288decd618</guid>
<link>https://academictorrents.com/details/975e291a08f913a5bcf9ab67a58743288decd618</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.1</description>
<size>304483148765</size>
</item><item>
<title>GeneralIndex.ngrams.2</title>
<category>Paper</category>
<infohash>b9dda5f60e08b8a7feb28a5591efe78f948ec77c</infohash>
<guid>https://academictorrents.com/details/b9dda5f60e08b8a7feb28a5591efe78f948ec77c</guid>
<link>https://academictorrents.com/details/b9dda5f60e08b8a7feb28a5591efe78f948ec77c</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.2</description>
<size>309139733637</size>
</item><item>
<title>GeneralIndex.ngrams.3</title>
<category>Paper</category>
<infohash>60ed6cb72a1be8159fe789c85093296f6c43ccff</infohash>
<guid>https://academictorrents.com/details/60ed6cb72a1be8159fe789c85093296f6c43ccff</guid>
<link>https://academictorrents.com/details/60ed6cb72a1be8159fe789c85093296f6c43ccff</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.ngrams.3</description>
<size>306252280275</size>
</item><item>
<title>GeneralIndex.keywords.f</title>
<category>Paper</category>
<infohash>0fe3ddc0ffe0aefe68f7780d7ff910e2d0caf841</infohash>
<guid>https://academictorrents.com/details/0fe3ddc0ffe0aefe68f7780d7ff910e2d0caf841</guid>
<link>https://academictorrents.com/details/0fe3ddc0ffe0aefe68f7780d7ff910e2d0caf841</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.f Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.f/GeneralIndex.keywords.f_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.f_meta.xml contains metadata about this torrent's contents.</description>
<size>22695378944</size>
</item><item>
<title>GeneralIndex.keywords.e</title>
<category>Paper</category>
<infohash>16a5cec5da87713a46aa2d646a256da761f6b1f4</infohash>
<guid>https://academictorrents.com/details/16a5cec5da87713a46aa2d646a256da761f6b1f4</guid>
<link>https://academictorrents.com/details/16a5cec5da87713a46aa2d646a256da761f6b1f4</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.e Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.e/GeneralIndex.keywords.e_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.e_meta.xml contains metadata about this torrent's contents.</description>
<size>22695378944</size>
</item><item>
<title>GeneralIndex.keywords.d</title>
<category>Paper</category>
<infohash>1139717e48d92944932dc6217d6ffa2d381534c1</infohash>
<guid>https://academictorrents.com/details/1139717e48d92944932dc6217d6ffa2d381534c1</guid>
<link>https://academictorrents.com/details/1139717e48d92944932dc6217d6ffa2d381534c1</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.d Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.d/GeneralIndex.keywords.d_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.d_meta.xml contains metadata about this torrent's contents.</description>
<size>22733127680</size>
</item><item>
<title>GeneralIndex.keywords.c</title>
<category>Paper</category>
<infohash>84d47bfc68a45f7e4c06beac1248c659668dd312</infohash>
<guid>https://academictorrents.com/details/84d47bfc68a45f7e4c06beac1248c659668dd312</guid>
<link>https://academictorrents.com/details/84d47bfc68a45f7e4c06beac1248c659668dd312</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.c Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.c/GeneralIndex.keywords.c_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.c_meta.xml contains metadata about this torrent's contents.</description>
<size>22858956800</size>
</item><item>
<title>GeneralIndex.keywords.b</title>
<category>Paper</category>
<infohash>34094874b83e08ed8cd52940f7d5b52cb6868173</infohash>
<guid>https://academictorrents.com/details/34094874b83e08ed8cd52940f7d5b52cb6868173</guid>
<link>https://academictorrents.com/details/34094874b83e08ed8cd52940f7d5b52cb6868173</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.b Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.b/GeneralIndex.keywords.b_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.b_meta.xml contains metadata about this torrent's contents.</description>
<size>23462936576</size>
</item><item>
<title>GeneralIndex.keywords.a</title>
<category>Paper</category>
<infohash>82f017a44a981e56d16d59f1e09719e537c864bb</infohash>
<guid>https://academictorrents.com/details/82f017a44a981e56d16d59f1e09719e537c864bb</guid>
<link>https://academictorrents.com/details/82f017a44a981e56d16d59f1e09719e537c864bb</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.a Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.a/GeneralIndex.keywords.a_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.a_meta.xml contains metadata about this torrent's contents.</description>
<size>24033361920</size>
</item><item>
<title>GeneralIndex.keywords.9</title>
<category>Paper</category>
<infohash>0cd9ece1aaacbada8180fd0dfb9138a445dd56d6</infohash>
<guid>https://academictorrents.com/details/0cd9ece1aaacbada8180fd0dfb9138a445dd56d6</guid>
<link>https://academictorrents.com/details/0cd9ece1aaacbada8180fd0dfb9138a445dd56d6</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.9 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.9/GeneralIndex.keywords.9_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.9_meta.xml contains metadata about this torrent's contents.</description>
<size>24247271424</size>
</item><item>
<title>GeneralIndex.keywords.8</title>
<category>Paper</category>
<infohash>4464cb30f91904ba4d795219aa17b7b3fe0ac6ff</infohash>
<guid>https://academictorrents.com/details/4464cb30f91904ba4d795219aa17b7b3fe0ac6ff</guid>
<link>https://academictorrents.com/details/4464cb30f91904ba4d795219aa17b7b3fe0ac6ff</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.8 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.8/GeneralIndex.keywords.8_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.8_meta.xml contains metadata about this torrent's contents.</description>
<size>24431820800</size>
</item><item>
<title>GeneralIndex.keywords.7</title>
<category>Paper</category>
<infohash>63e641f6072ad23c39bd0a11b791ba7d919d373b</infohash>
<guid>https://academictorrents.com/details/63e641f6072ad23c39bd0a11b791ba7d919d373b</guid>
<link>https://academictorrents.com/details/63e641f6072ad23c39bd0a11b791ba7d919d373b</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.7 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.7/GeneralIndex.keywords.7_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.7_meta.xml contains metadata about this torrent's contents.</description>
<size>24377294848</size>
</item><item>
<title>GeneralIndex.keywords.6</title>
<category>Paper</category>
<infohash>890842b0f31fbf8a887324e89f15774db3529bc0</infohash>
<guid>https://academictorrents.com/details/890842b0f31fbf8a887324e89f15774db3529bc0</guid>
<link>https://academictorrents.com/details/890842b0f31fbf8a887324e89f15774db3529bc0</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.6 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.6/GeneralIndex.keywords.6_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.6_meta.xml contains metadata about this torrent's contents.</description>
<size>24381489152</size>
</item><item>
<title>GeneralIndex.keywords.5</title>
<category>Paper</category>
<infohash>7939402b673b277f19e4399526c3d3ffcecd060d</infohash>
<guid>https://academictorrents.com/details/7939402b673b277f19e4399526c3d3ffcecd060d</guid>
<link>https://academictorrents.com/details/7939402b673b277f19e4399526c3d3ffcecd060d</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.5 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.5/GeneralIndex.keywords.5_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.5_meta.xml contains metadata about this torrent's contents.</description>
<size>24402460672</size>
</item><item>
<title>GeneralIndex.keywords.4</title>
<category>Paper</category>
<infohash>a7b27e869c4b663236399a87e958a09905447650</infohash>
<guid>https://academictorrents.com/details/a7b27e869c4b663236399a87e958a09905447650</guid>
<link>https://academictorrents.com/details/a7b27e869c4b663236399a87e958a09905447650</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.4 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.4/GeneralIndex.keywords.4_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.4_meta.xml contains metadata about this torrent's contents.</description>
<size>24410849280</size>
</item><item>
<title>GeneralIndex.keywords.3</title>
<category>Paper</category>
<infohash>07366824f3624830c307e484e33b1b7c2330429a</infohash>
<guid>https://academictorrents.com/details/07366824f3624830c307e484e33b1b7c2330429a</guid>
<link>https://academictorrents.com/details/07366824f3624830c307e484e33b1b7c2330429a</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.3 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.3/GeneralIndex.keywords.3_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.3_meta.xml contains metadata about this torrent's contents.</description>
<size>24419237888</size>
</item><item>
<title>GeneralIndex.keywords.2</title>
<category>Paper</category>
<infohash>b802ff9e68b239b4937eb56d6d716b6c21e9e702</infohash>
<guid>https://academictorrents.com/details/b802ff9e68b239b4937eb56d6d716b6c21e9e702</guid>
<link>https://academictorrents.com/details/b802ff9e68b239b4937eb56d6d716b6c21e9e702</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.2 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.2/GeneralIndex.keywords.2_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.2_meta.xml contains metadata about this torrent's contents.</description>
<size>24473763840</size>
</item><item>
<title>GeneralIndex.keywords.1</title>
<category>Dataset</category>
<infohash>e3b80835397dd4101ee766c940285d1058dfa2ed</infohash>
<guid>https://academictorrents.com/details/e3b80835397dd4101ee766c940285d1058dfa2ed</guid>
<link>https://academictorrents.com/details/e3b80835397dd4101ee766c940285d1058dfa2ed</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.1 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.1/GeneralIndex.keywords.1_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.1_meta.xml contains metadata about this torrent's contents.</description>
<size>24440209408</size>
</item><item>
<title>GeneralIndex.keywords.0</title>
<category>Dataset</category>
<infohash>a072f8bfe5ca7c250292d88b7f739d711e8349e0</infohash>
<guid>https://academictorrents.com/details/a072f8bfe5ca7c250292d88b7f739d711e8349e0</guid>
<link>https://academictorrents.com/details/a072f8bfe5ca7c250292d88b7f739d711e8349e0</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/GeneralIndex.keywords.0 Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/GeneralIndex.keywords.0/GeneralIndex.keywords.0_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file GeneralIndex.keywords.0_meta.xml contains metadata about this torrent's contents.</description>
<size>23932698624</size>
</item><item>
<title>LAION-400-MILLION OPEN DATASET</title>
<category>Dataset</category>
<infohash>34b94abbcefef5a240358b9acd7920c8b675aacc</infohash>
<guid>https://academictorrents.com/details/34b94abbcefef5a240358b9acd7920c8b675aacc</guid>
<link>https://academictorrents.com/details/34b94abbcefef5a240358b9acd7920c8b675aacc</link>
<description>LAION-400M: the world's largest openly available image-text-pair dataset, with 400 million samples. # Concept and Content The LAION-400M dataset is completely open and freely accessible. All images and texts in LAION-400M have been filtered with OpenAI's CLIP by calculating the cosine similarity between the text and image embeddings and dropping those with a similarity below 0.3. The threshold of 0.3 was determined through human evaluations and seems to be a good heuristic for estimating semantic image-text matching. The image-text pairs have been extracted from the Common Crawl web data dump and come from random web pages crawled between 2014 and 2021. # Download Information The download includes: the CLIP image embeddings (NumPy files), the parquet files, and a KNN index of the image embeddings. # LAION-400M Dataset Statistics LAION-400M, and future even bigger releases, are in fact datasets of datasets. For instance, it can be filtered by image size into smaller subsets. Number of unique samples: 413M</description>
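The CLIP similarity filter described above can be sketched in plain Python (a minimal illustration; the real pipeline computes CLIP embeddings first, and the function names here are illustrative, not LAION's code):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def keep_pair(image_emb, text_emb, threshold=0.3):
    """Keep an image-text pair only if its similarity reaches the 0.3 cutoff."""
    return cosine_similarity(image_emb, text_emb) >= threshold
```

In practice the embeddings are normalized CLIP vectors, so the dot product alone would suffice; the full cosine form is shown for clarity.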
<size>1211103363514</size>
</item><item>
<title>Wallstreetbets submissions/comments</title>
<category>Dataset</category>
<infohash>098cbcf9712a8747b89f7e235dae41431fd57f7e</infohash>
<guid>https://academictorrents.com/details/098cbcf9712a8747b89f7e235dae41431fd57f7e</guid>
<link>https://academictorrents.com/details/098cbcf9712a8747b89f7e235dae41431fd57f7e</link>
<description>All submissions and comments in r/wallstreetbets from the creation of the subreddit through June 2021. Extracted from the pushshift dump files: https://academictorrents.com/details/90e7a746b1c24e45af0940b37cffcec7c96c8096 An example python script for iterating over the lines in these dumps is here: https://github.com/Watchful1/PushshiftDumps/blob/master/scripts/single_file.py If you are interested in a similar file for another subreddit, feel free to DM u/Watchful1 on reddit</description>
<size>4381482023</size>
</item><item>
<title>Reddit comments/submissions 2005-06 to 2021-06</title>
<category>Dataset</category>
<infohash>90e7a746b1c24e45af0940b37cffcec7c96c8096</infohash>
<guid>https://academictorrents.com/details/90e7a746b1c24e45af0940b37cffcec7c96c8096</guid>
<link>https://academictorrents.com/details/90e7a746b1c24e45af0940b37cffcec7c96c8096</link>
<description>Reddit comments and submissions from 2005-06 to 2021-06 collected by pushshift which can be found here https://files.pushshift.io/reddit/ These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps</description>
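Each dump is newline-delimited JSON, one object per line. A minimal parser for already-decoded lines might look like the sketch below (the files themselves are zstandard-compressed, so in practice you would first wrap them with the third-party zstandard module using a large max_window_size, as the linked scripts do; that step is omitted here):

```python
import json

def iter_records(lines):
    """Yield one parsed JSON object per non-empty line, skipping
    blank or malformed lines (common at the tail of large dumps)."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue
```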
<size>1368249629581</size>
</item><item>
<title>census</title>
<category>Dataset</category>
<infohash>f8395a375912f1b666368d197ce10f0115ebfefd</infohash>
<guid>https://academictorrents.com/details/f8395a375912f1b666368d197ce10f0115ebfefd</guid>
<link>https://academictorrents.com/details/f8395a375912f1b666368d197ce10f0115ebfefd</link>
<description>Only for Or….a</description>
<size>17285552787</size>
</item><item>
<title>Wikipedia Training Data for Megatron-LM</title>
<category>Dataset</category>
<infohash>b6215a898a2a08b6061d23f2e4e1094121fb7082</infohash>
<guid>https://academictorrents.com/details/b6215a898a2a08b6061d23f2e4e1094121fb7082</guid>
<link>https://academictorrents.com/details/b6215a898a2a08b6061d23f2e4e1094121fb7082</link>
<description>A preprocessed dataset for training with https://github.com/NVIDIA/Megatron-LM. Please see the instructions at https://github.com/Lyken17/ML-Datasets for how to use it. Note: the author does not own any copyright in the data.</description>
<size>7840268306</size>
</item><item>
<title>Beyond Tripeptides - Associated Data</title>
<category>Dataset</category>
<infohash>117b3e94cd99cffa8511acf8252c32aa46f8e5d1</infohash>
<guid>https://academictorrents.com/details/117b3e94cd99cffa8511acf8252c32aa46f8e5d1</guid>
<link>https://academictorrents.com/details/117b3e94cd99cffa8511acf8252c32aa46f8e5d1</link>
<description>Self-assembling peptide nanostructures have been shown to be of great importance in nature and have presented many promising applications, for example, in medicine as drug-delivery vehicles, biosensors, and antivirals. Being very promising candidates for the growing field of bottom-up manufacture of functional nanomaterials, previous work (Frederix, et al. 2011 and 2015) has screened all possible amino acid combinations for di- and tripeptides in search of such materials. However, the enormous complexity and variety of linear combinations of the 20 amino acids make exhaustive simulation of all combinations of tetrapeptides and above infeasible. Therefore, we have developed an active machine-learning method (also known as “iterative learning” and “evolutionary search method”) which leverages a lower-resolution data set encompassing the whole search space and a just-in-time high-resolution data set which further analyzes those target peptides selected by the lower-resolution model. This model uses newly generated data upon each iteration to improve both lower- and higher-resolution models in the search for ideal candidates. Curation of the lower-resolution data set is explored as a method to control the selected candidates, based on criteria such as log P. A major aim of this method is to produce the best results in the least computationally demanding way. This model has been developed to be broadly applicable to other search spaces with minor changes to the algorithm, allowing its use in other areas of research.</description>
<size>5003577418</size>
</item><item>
<title>ImageNet21K (Winter 2021 Release)</title>
<category>Dataset</category>
<infohash>8ec0d8df0fbb507594557bce993920442f4f6477</infohash>
<guid>https://academictorrents.com/details/8ec0d8df0fbb507594557bce993920442f4f6477</guid>
<link>https://academictorrents.com/details/8ec0d8df0fbb507594557bce993920442f4f6477</link>
<description>ImageNet is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them nouns (80,000+). In ImageNet, we aim to provide on average 1,000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. Upon its completion, we hope ImageNet will offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy.</description>
<size>1185381173159</size>
</item><item>
<title>NPTEL2020 - Indian English Speech Dataset</title>
<category>Dataset</category>
<infohash>cc9dc56afd3055c7e0f021ec4f1824021558926c</infohash>
<guid>https://academictorrents.com/details/cc9dc56afd3055c7e0f021ec4f1824021558926c</guid>
<link>https://academictorrents.com/details/cc9dc56afd3055c7e0f021ec4f1824021558926c</link>
<description>An opus version of the dataset listed here - https://github.com/AI4Bharat/NPTEL2020-Indian-English-Speech-Dataset</description>
<size>118529505642</size>
</item><item>
<title>CheXpert Model Weights for TorchXRayVision</title>
<category>Dataset</category>
<infohash>5c7ee21e6770308f2d2b4bd829e896dbd9d3ee87</infohash>
<guid>https://academictorrents.com/details/5c7ee21e6770308f2d2b4bd829e896dbd9d3ee87</guid>
<link>https://academictorrents.com/details/5c7ee21e6770308f2d2b4bd829e896dbd9d3ee87</link>
<description>Irvin, J., et al (2019). CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison. AAAI Conference on Artificial Intelligence. http://arxiv.org/abs/1901.07031</description>
<size>6817570825</size>
</item><item>
<title>03FYZ 2020-2021 Tecniche di Programmazione (ITA)</title>
<category>Course</category>
<infohash>493d1ce5c9425b7caebb913619d653908b79ab10</infohash>
<guid>https://academictorrents.com/details/493d1ce5c9425b7caebb913619d653908b79ab10</guid>
<link>https://academictorrents.com/details/493d1ce5c9425b7caebb913619d653908b79ab10</link>
<description>Video lectures for the course "Tecniche di Programmazione" (Programming Techniques), held at Politecnico di Torino in the 2020/2021 academic year. Course instructors: Fulvio Corno, Alberto Monge Roffarello, Tatiana Tommasi. Course information: - official course page: http://bit.ly/tecn-progr - teaching material: https://github.com/TdP-2021/materiale - exercises and labs: https://github.com/TdP-2021 - exam papers: https://github.com/TdP-esami These video lectures are also available as a YouTube playlist.</description>
<size>12522843175</size>
</item><item>
<title>01TXY 2020-2021 Web Applications I</title>
<category>Course</category>
<infohash>445f19abe993fd2d7e1aac70c3f6e6f3bfd93189</infohash>
<guid>https://academictorrents.com/details/445f19abe993fd2d7e1aac70c3f6e6f3bfd93189</guid>
<link>https://academictorrents.com/details/445f19abe993fd2d7e1aac70c3f6e6f3bfd93189</link>
<description>Video Lectures of the course "Web Applications I" taken at Politecnico di Torino (Italy), in year 2020/2021.</description>
<size>8586753229</size>
</item><item>
<title>EPIC-KITCHENS-100</title>
<category>Dataset</category>
<infohash>c92b4a3cd3834e9af9666ac82379ff15ca289a83</infohash>
<guid>https://academictorrents.com/details/c92b4a3cd3834e9af9666ac82379ff15ca289a83</guid>
<link>https://academictorrents.com/details/c92b4a3cd3834e9af9666ac82379ff15ca289a83</link>
<description>EPIC-KITCHENS-100: extended footage for the EPIC-KITCHENS dataset, bringing it to 100 hours. 10.5523/bris.2g1n6qdydwa9u22shpxqzp0t8m 2020-09-10: N.b. please also see the ERRATUM published at https://github.com/epic-kitchens/epic-kitchens-100-annotations/blob/master/README.md#erratum 2021-05-25: this torrent supersedes the original, to which a small change was made.</description>
<size>795905919690</size>
</item><item>
<title>Duke MTMC Dataset</title>
<category>Dataset</category>
<infohash>00099d85f6d8e8134b47b301b64349f469303990</infohash>
<guid>https://academictorrents.com/details/00099d85f6d8e8134b47b301b64349f469303990</guid>
<link>https://academictorrents.com/details/00099d85f6d8e8134b47b301b64349f469303990</link>
<description>The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein images are cropped by hand-drawn bounding boxes. The dataset consists of 16,522 training images of 702 identities, 2,228 query images of another 702 identities, and 17,661 gallery images. Published: 2016. Images: 2,000,000. Identities: 2,700. Purpose: person re-identification, multi-camera tracking.</description>
<size>161889105</size>
</item><item>
<title>Market-1501 Dataset</title>
<category>Dataset</category>
<infohash>3ea1f8ae1d3155addff586a96006d122587663ee</infohash>
<guid>https://academictorrents.com/details/3ea1f8ae1d3155addff586a96006d122587663ee</guid>
<link>https://academictorrents.com/details/3ea1f8ae1d3155addff586a96006d122587663ee</link>
<description>The Market-1501 dataset was collected in front of a supermarket at Tsinghua University. A total of six cameras were used: five high-resolution cameras and one low-resolution camera. Overlap exists among the different cameras. Overall, the dataset contains 32,668 annotated bounding boxes of 1,501 identities. In this open system, images of each identity are captured by at most six cameras. We make sure that each annotated identity is present in at least two cameras, so that cross-camera search can be performed. The Market-1501 dataset has three featured properties. First, it employs the Deformable Part Model (DPM) as the pedestrian detector. Second, in addition to the true-positive bounding boxes, we also provide false-alarm detection results. Third, each identity may have multiple images under each camera; during cross-camera search, there are multiple queries and multiple ground truths for each identity. The dataset is annotated using the following rules. For each detected bounding box to be annotated, we manually draw a ground-truth bounding box that contains the pedestrian. Then, for the detected and hand-drawn bounding boxes, we calculate the ratio of the overlapping area to the union area. If the ratio is larger than 50%, the DPM bounding box is marked as "good"; if the ratio is smaller than 20%, the bounding box is marked as "distractor"; otherwise, it is marked as "junk", meaning that the image has zero influence on re-identification accuracy. Contents: 1) "bounding_box_test": 19,732 images used for testing. 2) "bounding_box_train": 12,936 images used for training. 3) "query": 750 identities; we randomly select one query image per camera, so the maximum number of query images is 6 for an identity; in total, there are 3,368 query images in this folder. 4) "gt_query": the ground-truth annotations. For each query, the relevant images are marked as "good" or "junk"; "junk" has zero impact on search accuracy and also includes images from the same camera as the query. 5) "gt_bbox": the hand-drawn bounding boxes, used to judge whether a DPM bounding box is good. Dataset statistics: identities: 1,501 (+1 for background); images: 12,936 (train) + 3,368 (query) + 15,913 (gallery).</description>
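The overlap-ratio rule used for annotation is the standard intersection-over-union test. A minimal sketch, with illustrative function names and boxes given as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def label_detection(ratio):
    """Apply the Market-1501 annotation rule to an overlap ratio."""
    if ratio > 0.5:
        return "good"        # larger than 50 percent
    if ratio >= 0.2:
        return "junk"        # otherwise: ignored during evaluation
    return "distractor"      # smaller than 20 percent
```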
<size>152733771</size>
</item><item>
<title>Open Academic Graph 2019</title>
<category>Dataset</category>
<infohash>4398ab05f1f39d12942f0d5e9ddbdd21beea87a7</infohash>
<guid>https://academictorrents.com/details/4398ab05f1f39d12942f0d5e9ddbdd21beea87a7</guid>
<link>https://academictorrents.com/details/4398ab05f1f39d12942f0d5e9ddbdd21beea87a7</link>
<description>A copy of the "Open Academic Graph v2" (OAGv2) corpus published by aminer.org and Microsoft Academic Graph in early 2019. Contains roughly 90 GB (compressed) of bibliographic metadata for hundreds of millions of publications. Related publications include: Jie Tang, Jing Zhang, Limin Yao, Juanzi Li, Li Zhang, and Zhong Su. ArnetMiner: Extraction and Mining of Academic Social Networks. In Proceedings of the Fourteenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD 2008). pp.990-998. Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June (Paul) Hsu, and Kuansan Wang. 2015. An Overview of Microsoft Academic Service (MAS) and Applications. In Proceedings of the 24th International Conference on World Wide Web (WWW ’15 Companion). ACM, New York, NY, USA, 243-246.</description>
<size>96053755904</size>
</item><item>
<title>ORCID Public Data File (2019)</title>
<category>Dataset</category>
<infohash>b6e2d2486e297a159d53724f41451821ff961166</infohash>
<guid>https://academictorrents.com/details/b6e2d2486e297a159d53724f41451821ff961166</guid>
<link>https://academictorrents.com/details/b6e2d2486e297a159d53724f41451821ff961166</link>
<description>These files contain a snapshot of all public data in the ORCID Registry associated with an ORCID record that was created or claimed by an individual as of October 1st, 2019. ORCID publishes this file once per year under a Creative Commons CC0 1.0 Universal public domain dedication. This means that, to the extent possible under law, ORCID has waived all copyright and related or neighbouring rights to the Public Data File. For more information on the file, see https://orcid.org/content/orcid-public-data-file-use-policy The file contains the public information associated with each user's ORCID record. The data is available in XML format and is further divided into separate files for easier management. One file contains the full record summary for each record. The rest of the data is divided into 11 files which contain the activities for each record, including full work data. Below is a more complete description of how the data is structured. Summaries file. Name: ORCID_2019_summaries.tar.gz Description: contains all the existing summaries; when extracted, it will generate the following file structure: summaries/[3-digit checksum]/[iD].xml Example: if you are looking for the summary of iD 0000-0002-7869-831X, decompress the file and you will find the summary under summaries/31X/0000-0002-7869-831X.xml. Activities files. Named: - ORCID_2019_activites_0.tar.gz - ORCID_2019_activites_1.tar.gz - ORCID_2019_activites_2.tar.gz - ORCID_2019_activites_3.tar.gz - ORCID_2019_activites_4.tar.gz - ORCID_2019_activites_5.tar.gz - ORCID_2019_activites_6.tar.gz - ORCID_2019_activites_7.tar.gz - ORCID_2019_activites_8.tar.gz - ORCID_2019_activites_9.tar.gz - ORCID_2019_activites_X.tar.gz Description: consists of 11 .tar.gz files; each file contains the public activities belonging to iDs that end with a given checksum character. 
The file hierarchy is as follows: [checksum]/[3 digits checksum]/[iD]/[activity type]/[iD]_[activity_type]_[putcode].xml Examples: If you are looking for the public activities that belong to  0000-0002-7869-831X: Decompress the file  ORCID_2019_activites_X.tar.gz . You will find all the public activities under  X/31X/0000-0002-7869-831X/  which are then sub-divided in folders for each activity type. If you are looking for all the employments that belong to  0000-0002-7869-831X : Decompress the file  ORCID_2019_activites_X.tar.gz , Navigate to  X/31X/0000-0002-7869-831X/employments . If you are looking for the employment with put-code  7923980  that belongs to  0000-0002-7869-831X  : Decompress the file  ORCID_2019_activites_X.tar.gz . You will find that employment under  X/31X/0000-0002-7869-831X/employments/0000-0002-7869-831X_employments_7923980.xml .</description>
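The checksum-based layout above can be turned into paths mechanically. A small sketch (an illustrative helper, assuming the 2019 file names listed above):

```python
def orcid_paths(orcid_id, year="2019"):
    """Return the activities archive name and the summary path for an
    ORCID iD, following the layout described above."""
    checksum = orcid_id[-1]   # final checksum character: a digit or X
    tail = orcid_id[-3:]      # 3-character checksum directory, e.g. 31X
    # Note: the archive names really are spelled "activites" in the release.
    archive = "ORCID_" + year + "_activites_" + checksum + ".tar.gz"
    summary = "summaries/" + tail + "/" + orcid_id + ".xml"
    return archive, summary
```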
<size>77863059456</size>
</item><item>
<title>ORCID Public Data File (2020)</title>
<category>Dataset</category>
<infohash>ca1cf38367fb8324d195429980fbf6a977d0f1ed</infohash>
<guid>https://academictorrents.com/details/ca1cf38367fb8324d195429980fbf6a977d0f1ed</guid>
<link>https://academictorrents.com/details/ca1cf38367fb8324d195429980fbf6a977d0f1ed</link>
<description>These files contain a snapshot of all public data in the ORCID Registry associated with an ORCID record that was created or claimed by an individual as of October 1st, 2020. ORCID publishes this file once per year under a Creative Commons CC0 1.0 Universal public domain dedication. This means that, to the extent possible under law, ORCID has waived all copyright and related or neighbouring rights to the Public Data File. For more information on the file, see https://orcid.org/content/orcid-public-data-file-use-policy The file contains the public information associated with each user's ORCID record. The data is available in XML format and is further divided into separate files for easier management. One file contains the full record summary for each record. The rest of the data is divided into 11 files which contain the activities for each record, including full work data. Below is a more complete description of how the data is structured. Summaries file. Name: ORCID_2020_10_summaries.tar.gz Description: contains all the existing summaries; when extracted, it will generate the following file structure: summaries/[3-digit checksum]/[iD].xml Example: if you are looking for the summary of iD 0000-0002-7869-831X, decompress the file and you will find the summary under summaries/31X/0000-0002-7869-831X.xml. Activities files. Named: - ORCID_2020_10_activites_0.tar.gz - ORCID_2020_10_activites_1.tar.gz - ORCID_2020_10_activites_2.tar.gz - ORCID_2020_10_activites_3.tar.gz - ORCID_2020_10_activites_4.tar.gz - ORCID_2020_10_activites_5.tar.gz - ORCID_2020_10_activites_6.tar.gz - ORCID_2020_10_activites_7.tar.gz - ORCID_2020_10_activites_8.tar.gz - ORCID_2020_10_activites_9.tar.gz - ORCID_2020_10_activites_X.tar.gz Description: consists of 11 .tar.gz files; each file contains the public activities belonging to iDs that end with a given checksum character. 
The file hierarchy is as follows: [checksum]/[3 digits checksum]/[iD]/[activity type]/[iD]_[activity_type]_[putcode].xml Examples: If you are looking for the public activities that belong to  0000-0002-7869-831X: Decompress the file  ORCID_2020_10_activites_X.tar.gz . You will find all the public activities under  X/31X/0000-0002-7869-831X/  which are then sub-divided in folders for each activity type. If you are looking for all the employments that belong to  0000-0002-7869-831X : Decompress the file  ORCID_2020_10_activites_X.tar.gz , Navigate to  X/31X/0000-0002-7869-831X/employments . If you are looking for the employment with put-code  7923980  that belongs to  0000-0002-7869-831X  : Decompress the file  ORCID_2020_10_activites_X.tar.gz . You will find that employment under  X/31X/0000-0002-7869-831X/employments/0000-0002-7869-831X_employments_7923980.xml .</description>
<size>90596966400</size>
</item><item>
<title>ImageNet-21K-P dataset (processed from fall11_whole.tar)</title>
<category>Dataset</category>
<infohash>84461687ecb08ce9d0f24b70d0528e4ae5d6966e</infohash>
<guid>https://academictorrents.com/details/84461687ecb08ce9d0f24b70d0528e4ae5d6966e</guid>
<link>https://academictorrents.com/details/84461687ecb08ce9d0f24b70d0528e4ae5d6966e</link>
<description>ImageNet-1K serves as the primary dataset for pretraining deep learning models for computer vision tasks. The ImageNet-21K dataset, which contains more pictures and classes, is used less frequently for pretraining, mainly due to its complexity and an underestimation of its added value compared to standard ImageNet-1K pretraining. This paper aims to close this gap and make high-quality, efficient pretraining on ImageNet-21K available for everyone. Via a dedicated preprocessing stage, utilizing WordNet hierarchies, and a novel training scheme called semantic softmax, we show that different models, including small mobile-oriented models, significantly benefit from ImageNet-21K pretraining on numerous datasets and tasks. We also show that we outperform previous ImageNet-21K pretraining schemes for prominent new models like ViT. Our proposed pretraining pipeline is efficient, accessible, and leads to SoTA reproducible results from a publicly available dataset.</description>
<size>279013071677</size>
</item><item>
<title>Coursera - Python 3 Programming Specialization</title>
<category>Course</category>
<infohash>83918ea4bb488cefd3d8b8b8762597d32aebb4fa</infohash>
<guid>https://academictorrents.com/details/83918ea4bb488cefd3d8b8b8762597d32aebb4fa</guid>
<link>https://academictorrents.com/details/83918ea4bb488cefd3d8b8b8762597d32aebb4fa</link>
<description/>
<size>2101534921</size>
</item><item>
<title>Coursera - Data Science Fundamentals with Python and SQL</title>
<category>Course</category>
<infohash>bdc0bb1499b1992a5488b4bbcfc9288c30793c08</infohash>
<guid>https://academictorrents.com/details/bdc0bb1499b1992a5488b4bbcfc9288c30793c08</guid>
<link>https://academictorrents.com/details/bdc0bb1499b1992a5488b4bbcfc9288c30793c08</link>
<description/>
<size>807986536</size>
</item><item>
<title>Coursera - Applied Data Science with Python</title>
<category>Course</category>
<infohash>c9ef88cfe0137f6a4292823f0765a5d4b93ff313</infohash>
<guid>https://academictorrents.com/details/c9ef88cfe0137f6a4292823f0765a5d4b93ff313</guid>
<link>https://academictorrents.com/details/c9ef88cfe0137f6a4292823f0765a5d4b93ff313</link>
<description/>
<size>2114984419</size>
</item><item>
<title>COCO 2017 Resized to 256x256</title>
<category>Dataset</category>
<infohash>eea5a532dd69de7ff93d5d9c579eac55a41cb700</infohash>
<guid>https://academictorrents.com/details/eea5a532dd69de7ff93d5d9c579eac55a41cb700</guid>
<link>https://academictorrents.com/details/eea5a532dd69de7ff93d5d9c579eac55a41cb700</link>
<description>COCO: Common Objects in Context, resized to 256x256.</description>
<size>1643933412</size>
</item><item>
<title>80,000 Curiosity Rover Images with Metadata (2012-2021)</title>
<category>Dataset</category>
<infohash>9d00cae359b90a2647c6b0971325219dad8d05c7</infohash>
<guid>https://academictorrents.com/details/9d00cae359b90a2647c6b0971325219dad8d05c7</guid>
<link>https://academictorrents.com/details/9d00cae359b90a2647c6b0971325219dad8d05c7</link>
<description>Images collected by reddit user /u/RedBlaze4: "I have downloaded all of the pictures the rover Curiosity has taken from 2012 to 23/02/2021. The torrent also includes a big JSON file with all the info for each picture. There are 80k pictures for a total of 32 GB." "On the raw image site there are a lot of subframes (cropped versions of the full pictures), so I downloaded only the full ones."</description>
<size>33576880275</size>
</item><item>
<title>350,000 Raw Images from Spirit and Opportunity Mars Rovers (with Metadata)</title>
<category>Dataset</category>
<infohash>0e717963ca5000efc7ab6c6f97b2ff4c066d5f6f</infohash>
<guid>https://academictorrents.com/details/0e717963ca5000efc7ab6c6f97b2ff4c066d5f6f</guid>
<link>https://academictorrents.com/details/0e717963ca5000efc7ab6c6f97b2ff4c066d5f6f</link>
<description>Metadata info: https://mars.nasa.gov/mer/gallery/edr_filename_key.html (https://archive.is/anp4x) Collected by reddit user /u/Bagline: "Scraped from Feb-19-2021 through Feb-24-2021. 35 GB, 355,460 images (1,535 fewer than expected; I can only account for about half of the missing images, but it's too much work to go through again). Uploaded as zip because it's 100x faster to transfer."</description>
<size>37967941188</size>
</item><item>
<title>Ukrainian Open Speech To Text Dataset 4.2 ~1200 hours</title>
<category>Dataset</category>
<infohash>fcf8bb60c59e9eb583df003d54ed61776650beb8</infohash>
<guid>https://academictorrents.com/details/fcf8bb60c59e9eb583df003d54ed61776650beb8</guid>
<link>https://academictorrents.com/details/fcf8bb60c59e9eb583df003d54ed61776650beb8</link>
<description>Speech Recognition for Ukrainian 🇺🇦 The aim of this repository is to collect information and datasets for speech recognition in Ukrainian. Get in touch with us in our Telegram group: https://t.me/speech_recognition_uk Datasets</description>
<size>188307794768</size>
</item><item>
<title>Vggface2: A dataset for recognising faces across pose and age</title>
<category>Dataset</category>
<infohash>535113b8395832f09121bc53ac85d7bc8ef6fa5b</infohash>
<guid>https://academictorrents.com/details/535113b8395832f09121bc53ac85d7bc8ef6fa5b</guid>
<link>https://academictorrents.com/details/535113b8395832f09121bc53ac85d7bc8ef6fa5b</link>
<description>In this paper, we introduce a new large-scale face dataset named VGGFace2. The dataset contains 3.31 million images of 9131 subjects, with an average of 362.6 images for each subject. Images are downloaded from Google Image Search and have large variations in pose, age, illumination, ethnicity and profession (e.g. actors, athletes, politicians). The dataset was collected with three goals in mind: (i) to have both a large number of identities and also a large number of images for each identity; (ii) to cover a large range of pose, age and ethnicity; and (iii) to minimise the label noise. We describe how the dataset was collected, in particular the automated and manual filtering stages to ensure a high accuracy for the images of each identity. To assess face recognition performance using the new dataset, we train ResNet-50 (with and without Squeeze-and-Excitation blocks) Convolutional Neural Networks on VGGFace2, on MS-Celeb-1M, and on their union, and show that training on VGGFace2 leads to improved recognition performance over pose and age. Finally, using the models trained on these datasets, we demonstrate state-of-the-art performance on the IJB-A and IJB-B face recognition benchmarks, exceeding the previous state-of-the-art by a large margin. The dataset and models are publicly available. Please make sure to pay attention to the License information for using the dataset for Commercial/Research purposes (Terms of Use) available on http://www.robots.ox.ac.uk/~vgg/data/vgg_face2/.</description>
<size>40249987403</size>
</item><item>
<title>Flickr Faces HQ (FFHQ) 70K from StyleGAN</title>
<category>Dataset</category>
<infohash>1c1e60f484e911b564de6b4d8b643e19154d5809</infohash>
<guid>https://academictorrents.com/details/1c1e60f484e911b564de6b4d8b643e19154d5809</guid>
<link>https://academictorrents.com/details/1c1e60f484e911b564de6b4d8b643e19154d5809</link>
<description>Flickr-Faces-HQ (FFHQ) is a high-quality image dataset of human faces, originally created as a benchmark for generative adversarial networks (GAN). The dataset consists of 70,000 high-quality PNG images at 1024x1024 resolution and contains considerable variation in terms of age, ethnicity and image background. It also has good coverage of accessories such as eyeglasses, sunglasses, hats, etc. The images were crawled from Flickr, thus inheriting all the biases of that website, and automatically aligned and cropped using dlib. Only images under permissive licenses were collected. Various automatic filters were used to prune the set, and finally Amazon Mechanical Turk was used to remove the occasional statues, paintings, or photos of photos.</description>
<size>391800361205</size>
</item><item>
<title>Breast Ultrasound Images Dataset (Dataset BUSI)</title>
<category>Dataset</category>
<infohash>d0b7b7ae40610bbeaea385aeb51658f527c86a16</infohash>
<guid>https://academictorrents.com/details/d0b7b7ae40610bbeaea385aeb51658f527c86a16</guid>
<link>https://academictorrents.com/details/d0b7b7ae40610bbeaea385aeb51658f527c86a16</link>
<description>The data collected at baseline include breast ultrasound images of women between 25 and 75 years old. The data was collected in 2018 from 600 female patients. The dataset consists of 780 images with an average image size of 500x500 pixels, in PNG format. The ground truth (mask) images are provided alongside the original images. The images are categorized into three classes: normal, benign, and malignant. If you use this dataset, please cite: Al-Dhabyani W, Gomaa M, Khaled H, Fahmy A. Dataset of breast ultrasound images. Data in Brief. 2020 Feb;28:104863. DOI: 10.1016/j.dib.2019.104863. Subject area: Medicine and Dentistry. More specific subject area: Radiology and Imaging. Type of data: images and mask images. How data was acquired: LOGIQ E9 ultrasound and LOGIQ E9 Agile ultrasound systems. Data format: PNG. Experimental factors: all images are classified as normal, benign, or malignant. Experimental features: when medical images are used for training deep learning models, they provide fast and accurate results in classification, detection, and segmentation of breast cancer. Data source location: Baheya Hospital for Early Detection and Treatment of Women's Cancer, Cairo, Egypt.</description>
<size>205873341</size>
</item><item>
<title>The Pile: An 800GB Dataset of Diverse Text for Language Modeling</title>
<category>Dataset</category>
<infohash>0d366035664fdf51cfbe9f733953ba325776e667</infohash>
<guid>https://academictorrents.com/details/0d366035664fdf51cfbe9f733953ba325776e667</guid>
<link>https://academictorrents.com/details/0d366035664fdf51cfbe9f733953ba325776e667</link>
<description>## What is the Pile? The Pile is an 825 GiB diverse, open-source language-modelling dataset consisting of 22 smaller, high-quality datasets combined. ## Why is the Pile a good training set? Recent work has shown that, especially for large models, diversity in data sources improves the model's general cross-domain knowledge as well as its downstream generalization capability. In our evaluations, models trained on the Pile not only show moderate improvements on traditional language modeling benchmarks, they also show significant improvements on Pile BPB. ## Why is the Pile a good benchmark? To score well on Pile BPB (bits per byte), a model must be able to understand many disparate domains, including books, GitHub repositories, webpages, chat logs, and medical, physics, math, computer science, and philosophy papers. Pile BPB is a measure of world knowledge and reasoning ability in these domains, making it a robust benchmark of general, cross-domain text modeling ability for large language models.</description>
<size>772891257239</size>
</item><item>
<title>Ukrainian Open Speech To Text Dataset ~1000 hours</title>
<category>Dataset</category>
<infohash>50f7a8e6157a9c2e38919afee0a11d8145e35556</infohash>
<guid>https://academictorrents.com/details/50f7a8e6157a9c2e38919afee0a11d8145e35556</guid>
<link>https://academictorrents.com/details/50f7a8e6157a9c2e38919afee0a11d8145e35556</link>
<description>Speech Recognition for Ukrainian 🇺🇦 The aim of this repository is to collect information and datasets for speech recognition in Ukrainian. Get in touch with us in our Telegram group: https://t.me/speech_recognition_uk Datasets</description>
<size>121769117729</size>
</item><item>
<title>NASA Astronomy Picture of the Day Archive (7800 images, 2011)</title>
<category>Dataset</category>
<infohash>5f755e078ee9195b8ae0b3336710e6ce92ef3251</infohash>
<guid>https://academictorrents.com/details/5f755e078ee9195b8ae0b3336710e6ce92ef3251</guid>
<link>https://academictorrents.com/details/5f755e078ee9195b8ae0b3336710e6ce92ef3251</link>
<description>Archive of over 7800 images from apod.nasa.gov, originally organized in 2011</description>
<size>2816870400</size>
</item><item>
<title>10 years of Dukascopy Forex Tick Data (2008-2019)</title>
<category>Dataset</category>
<infohash>8baee145786f4311b66bea5d13ef30eedce04a24</infohash>
<guid>https://academictorrents.com/details/8baee145786f4311b66bea5d13ef30eedce04a24</guid>
<link>https://academictorrents.com/details/8baee145786f4311b66bea5d13ef30eedce04a24</link>
<description>Data collected and formatted by Justin Timperio: "In my exploration of the world of big data, I became curious about tick data. Tick data is extremely granular and, due to its size, provides a great challenge for those looking to work on their optimization skills. Unfortunately, market data is almost always behind a paywall or de-sampled to the point of uselessness. After discovering the Dukascopy API, I knew I wanted to make this data available to all in a more accessible format." Total Line Count: 8,495,770,706 Total Data Points: 33,983,082,824 Total Decompressed Size: 501 GB</description>
<size>65032104495</size>
</item><item>
<title>115 paintings from the Hermitage museum, high-resolution, JPEG</title>
<category>Dataset</category>
<infohash>0ef42919a5688ea60f7174ccf899a91774508b48</infohash>
<guid>https://academictorrents.com/details/0ef42919a5688ea60f7174ccf899a91774508b48</guid>
<link>https://academictorrents.com/details/0ef42919a5688ea60f7174ccf899a91774508b48</link>
<description>115 paintings from the Hermitage museum, high-resolution, JPEG All images are public domain.</description>
<size>2233151877</size>
</item><item>
<title>Space Exploration Archive: Modern and Historic Images, Videos, and Documents</title>
<category>Dataset</category>
<infohash>c10dcee89567c78d3fa80cc9411bb705fa242416</infohash>
<guid>https://academictorrents.com/details/c10dcee89567c78d3fa80cc9411bb705fa242416</guid>
<link>https://academictorrents.com/details/c10dcee89567c78d3fa80cc9411bb705fa242416</link>
<description>Contains archived images, documents, and videos from several governmental and private space agencies</description>
<size>11370805873</size>
</item><item>
<title>Research Publications on SARS-CoV-2 (COVID-19): A Study of Publication Trends using the R Package</title>
<category>Paper</category>
<infohash>c40a76aa036f733909962923d94ee261c5065afe</infohash>
<guid>https://academictorrents.com/details/c40a76aa036f733909962923d94ee261c5065afe</guid>
<link>https://academictorrents.com/details/c40a76aa036f733909962923d94ee261c5065afe</link>
<description>This study provides a bibliometric review of 1,027 documents published on COVID-19, extracted from the 'dimensions.ai' database, published in 228 journals, and authored by 3,436 authors. For the analysis, the Bibliometrix R package was used through the biblioshiny interface. A topical query was conducted and 2,973 bibliographic records were downloaded from the online database "Dimensions.ai" using the search strategy: Text - 'Coronavirus OR COVID-19 OR SARS-CoV-2' in the title and abstract; Field of Research is Division code 08, "Information and Computing Science". Documents from the earliest available record to the most current were included. The bibliometric study revealed a sudden rise in annual scientific production in the year 2019 with the advent of the COVID-19 pandemic. Further study revealed the most prolific authors, journals, and affiliations. In addition, we present research collaboration networks at the author level.</description>
<size>1224546</size>
</item><item>
<title>January 2021 Public Data File from Crossref</title>
<category>Dataset</category>
<infohash>e4287cb7619999709f6e9db5c359dda17e93d515</infohash>
<guid>https://academictorrents.com/details/e4287cb7619999709f6e9db5c359dda17e93d515</guid>
<link>https://academictorrents.com/details/e4287cb7619999709f6e9db5c359dda17e93d515</link>
<description>A data file of the public elements from Crossref’s 120+ million metadata records (in JSON format). Note that this Crossref metadata is always openly available. The difference here is that we’ve done the time-saving work of putting all of the records registered through 7th January 2021 into one file for download. To keep this metadata current, you can access new records via our public API at: https://api.crossref.org If you do use our API, we encourage you to read the section of the documentation on "etiquette" at https://github.com/CrossRef/rest-api-doc#etiquette. That is, how to use the API without making it impossible for others to use.</description>
<size>102509014730</size>
</item><item>
<title>14BHD 2020-2021 Informatica (ITA)</title>
<category>Course</category>
<infohash>2eab1a9c70d636b14c4f2118f3818875e75660a8</infohash>
<guid>https://academictorrents.com/details/2eab1a9c70d636b14c4f2118f3818875e75660a8</guid>
<link>https://academictorrents.com/details/2eab1a9c70d636b14c4f2118f3818875e75660a8</link>
<description>Video lectures for the Computer Science (Informatica) course for the first year of the Engineering programs, held at Politecnico di Torino in the 2020/2021 academic year. Course instructors: Fulvio Corno, Juan Pablo Saenz Moreno, Luisa Fernanda Barrera Leon. Course information: official course page: http://bit.ly/polito-informatica These video lectures are also available as a playlist on YouTube.</description>
<size>10146461903</size>
</item><item>
<title>Caltech CS124 Operating Systems</title>
<category>Course</category>
<infohash>8835f0ca640ca9405f32c7d00c140cc16fcfa078</infohash>
<guid>https://academictorrents.com/details/8835f0ca640ca9405f32c7d00c140cc16fcfa078</guid>
<link>https://academictorrents.com/details/8835f0ca640ca9405f32c7d00c140cc16fcfa078</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/caltech-cs124-operating-systems.</description>
<size>808020401</size>
</item><item>
<title>biological_sex_classifier_pretrained</title>
<category>Dataset</category>
<infohash>05f279603d1c26aed9d3b1031ab65f0582e27d52</infohash>
<guid>https://academictorrents.com/details/05f279603d1c26aed9d3b1031ab65f0582e27d52</guid>
<link>https://academictorrents.com/details/05f279603d1c26aed9d3b1031ab65f0582e27d52</link>
<description/>
<size>692581313</size>
</item><item>
<title>80 Million Tiny Images</title>
<category>Dataset</category>
<infohash>bb3e9011d5839145c63757bf3188cd782162b1b5</infohash>
<guid>https://academictorrents.com/details/bb3e9011d5839145c63757bf3188cd782162b1b5</guid>
<link>https://academictorrents.com/details/bb3e9011d5839145c63757bf3188cd782162b1b5</link>
<description>80 Million Tiny Images stacked in groups of 64: there are ~12 million .jpg files, each containing 64 32x32 images stacked in a single column (so each file is 32x(64*32) pixels). This content is hosted at the Internet Archive at https://archive.org/details/tiny-images Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/tiny-images/tiny-images_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file tiny-images_meta.xml contains metadata about this torrent's contents.</description>
<size>24452792320</size>
</item><item>
<title>thermodynamics.docx (Case study on Entropy variations)</title>
<category>Paper</category>
<infohash>632090223863ebe433f0e598b4819638e4d00470</infohash>
<guid>https://academictorrents.com/details/632090223863ebe433f0e598b4819638e4d00470</guid>
<link>https://academictorrents.com/details/632090223863ebe433f0e598b4819638e4d00470</link>
<description>In this paper, extensive thought experiments are presented on the violation of thermodynamic laws, dealing mainly with exothermic reactions in which heat is evolved or liberated into the atmosphere. Thermodynamics mostly deals with heat and energy interactions in various phases and transformations. As heat is energy in transit, it cannot be liberated without a definite direction of transfer, and it cannot be stored. Heat only manifests during a change of state of a system, where temperature differs during a phase change. In contradiction to this, some final assumptions and opinions are framed as a conclusion addressing the violation of the first and second laws of thermodynamics in stating that heat is liberated in exothermic reactions.</description>
<size>68554</size>
</item><item>
<title>Kinetics700 dataset</title>
<category>Dataset</category>
<infohash>49f203189fb69ae96fb40a6d0e129949e1dfec98</infohash>
<guid>https://academictorrents.com/details/49f203189fb69ae96fb40a6d0e129949e1dfec98</guid>
<link>https://academictorrents.com/details/49f203189fb69ae96fb40a6d0e129949e1dfec98</link>
<description>https://arxiv.org/abs/1907.06987 SHA1 checksum: kinetics700_v1.zip: 5f0020ff374692845fa96c979cd095544af3f34b kinetics700_v1_validation.zip: d6a1131732904ef5047997902f8bf5408d388914</description>
<size>365475824921</size>
</item><item>
<title>youcook2_features_howto100m</title>
<category>Dataset</category>
<infohash>70417e3793dbbb03ca68981307860254766d5a1d</infohash>
<guid>https://academictorrents.com/details/70417e3793dbbb03ca68981307860254766d5a1d</guid>
<link>https://academictorrents.com/details/70417e3793dbbb03ca68981307860254766d5a1d</link>
<description>YouCook2 Video Features HowTo100M.</description>
<size>662214346</size>
</item><item>
<title>youcook2_features_2d3d</title>
<category>Dataset</category>
<infohash>3ae97c261ed32d3bd5326d3bf6991c9e2ea3dc17</infohash>
<guid>https://academictorrents.com/details/3ae97c261ed32d3bd5326d3bf6991c9e2ea3dc17</guid>
<link>https://academictorrents.com/details/3ae97c261ed32d3bd5326d3bf6991c9e2ea3dc17</link>
<description>YouCook2 Video Features 2D3D.</description>
<size>13868347166</size>
</item><item>
<title>activitynet-densecaptions_features_icep-v3</title>
<category>Dataset</category>
<infohash>0c824440c94cc18ace1cb2c77423919b728d703e</infohash>
<guid>https://academictorrents.com/details/0c824440c94cc18ace1cb2c77423919b728d703e</guid>
<link>https://academictorrents.com/details/0c824440c94cc18ace1cb2c77423919b728d703e</link>
<description>ActivityNet-DenseCaptions Video Features ICEP-V3.</description>
<size>55194724353</size>
</item><item>
<title>Kinetics400 Dataset: The Kinetics Human Action Video Dataset</title>
<category>Dataset</category>
<infohash>184d11318372f70018cf9a72ef867e2fb9ce1d26</infohash>
<guid>https://academictorrents.com/details/184d11318372f70018cf9a72ef867e2fb9ce1d26</guid>
<link>https://academictorrents.com/details/184d11318372f70018cf9a72ef867e2fb9ce1d26</link>
<description>https://arxiv.org/pdf/1705.06950.pdf MD5 checksum: kinetics400.zip: 33224b5b77c634aa6717da686efce2d4 kinetics400_validation.zip: 013358d458477d7ac10cebb9e84df354</description>
<size>157312153084</size>
</item><item>
<title>MillionSongSubset</title>
<category>Dataset</category>
<infohash>464d60f3c8d93a36cd53a41689becd255ca39fd0</infohash>
<guid>https://academictorrents.com/details/464d60f3c8d93a36cd53a41689becd255ca39fd0</guid>
<link>https://academictorrents.com/details/464d60f3c8d93a36cd53a41689becd255ca39fd0</link>
<description>Million Song Dataset</description>
<size>2738454294</size>
</item><item>
<title>TB Portal Tuberculosis Chest X-ray dataset for Belarus</title>
<category>Dataset</category>
<infohash>509f986b456b6fce04c15f9d1de22cd4ccb2c4b7</infohash>
<guid>https://academictorrents.com/details/509f986b456b6fce04c15f9d1de22cd4ccb2c4b7</guid>
<link>https://academictorrents.com/details/509f986b456b6fce04c15f9d1de22cd4ccb2c4b7</link>
<description>This is a tuberculosis chest X-ray dataset of patients who are resistant to conventional tuberculosis treatment. Data is provided in raw format as available at https://tbportals.niaid.nih.gov. The dataset mainly comes from the population of Belarus; in total, over 1000 tuberculosis cases are provided. Credits to: TB Portals Program, Office of Cyber Infrastructure and Computational Biology (OCICB), National Institute of Allergy and Infectious Diseases (NIAID).</description>
<size>12398871154</size>
</item><item>
<title>s3dis.tar.gz</title>
<category>Paper</category>
<infohash>3b8700004986f3c2e9c1aa12837fd8da93d4f54f</infohash>
<guid>https://academictorrents.com/details/3b8700004986f3c2e9c1aa12837fd8da93d4f54f</guid>
<link>https://academictorrents.com/details/3b8700004986f3c2e9c1aa12837fd8da93d4f54f</link>
<description>S3DIS benchmark</description>
<size>4637713148</size>
</item><item>
<title>UT Zappos50K (Version 2.1)</title>
<category>Dataset</category>
<infohash>3b3cb58f4ccafc6320d06d00f0862a4ba923b510</infohash>
<guid>https://academictorrents.com/details/3b3cb58f4ccafc6320d06d00f0862a4ba923b510</guid>
<link>https://academictorrents.com/details/3b3cb58f4ccafc6320d06d00f0862a4ba923b510</link>
<description>UT Zappos50K (UT-Zap50K) is a large shoe dataset consisting of 50,025 catalog images collected from Zappos.com. The images are divided into 4 major categories (shoes, sandals, slippers, and boots), followed by functional types and individual brands. The shoes are centered on a white background and pictured in the same orientation for convenient analysis. This dataset was created in the context of an online shopping task, where users pay special attention to fine-grained visual differences. For instance, a shopper is more likely to be deciding between two pairs of similar men's running shoes than between a woman's high heel and a man's slipper. GIST and LAB color features are provided. In addition, each image has 8 associated metadata labels (gender, materials, etc.) that are used to filter the shoes on Zappos.com. https://i.imgur.com/RoVL6qr.jpg # Citation This dataset is for academic, non-commercial use only. If you use this dataset in a publication, please cite the following papers: A. Yu and K. Grauman. "Fine-Grained Visual Comparisons with Local Learning". In CVPR, 2014. [paper] [supp] [poster] [bibtex] [project page]</description>
<size>887031043</size>
</item><item>
<title>Garments Dataset</title>
<category>Dataset</category>
<infohash>2121eef25c261d7a005f130c335492aa56ff3dc7</infohash>
<guid>https://academictorrents.com/details/2121eef25c261d7a005f130c335492aa56ff3dc7</guid>
<link>https://academictorrents.com/details/2121eef25c261d7a005f130c335492aa56ff3dc7</link>
<description>A large image dataset of garments for training.</description>
<size>24771215740</size>
</item><item>
<title>papers-past</title>
<category>Dataset</category>
<infohash>7115cca454dae7ea3ac9e90e25407efb8fc0667f</infohash>
<guid>https://academictorrents.com/details/7115cca454dae7ea3ac9e90e25407efb8fc0667f</guid>
<link>https://academictorrents.com/details/7115cca454dae7ea3ac9e90e25407efb8fc0667f</link>
<description>National Library of New Zealand "Papers Past" open data pilot: 19th-century newspaper text, OCR in METS/ALTO XML. https://natlib.govt.nz/about-us/open-data/papers-past-metadata/papers-past-newspaper-open-data-pilot/dataset-papers-past-newspaper-open-data-pilot</description>
<size>251117461591</size>
</item><item>
<title>dbg_elf_bins.tar.lzma</title>
<category>Dataset</category>
<infohash>5563b93fc02f8ec41338485ba756c6da5ff580b3</infohash>
<guid>https://academictorrents.com/details/5563b93fc02f8ec41338485ba756c6da5ff580b3</guid>
<link>https://academictorrents.com/details/5563b93fc02f8ec41338485ba756c6da5ff580b3</link>
<description/>
<size>184851471812</size>
</item><item>
<title>PanNuke: An Open Pan-Cancer Histology Dataset for Nuclei Instance Segmentation and Classification</title>
<category>Dataset</category>
<infohash>99f2c7b57b95500711e33f2ee4d14c9fd7c7366c</infohash>
<guid>https://academictorrents.com/details/99f2c7b57b95500711e33f2ee4d14c9fd7c7366c</guid>
<link>https://academictorrents.com/details/99f2c7b57b95500711e33f2ee4d14c9fd7c7366c</link>
<description>https://i.imgur.com/iYlXSCm.png Semi-automatically generated nuclei instance segmentation and classification dataset with exhaustive nuclei labels across 19 different tissue types. The dataset consists of 481 visual fields, of which 312 are randomly sampled from more than 20K whole-slide images at different magnifications, from multiple data sources. In total the dataset contains 205,343 labeled nuclei, each with an instance segmentation mask. Models trained on PanNuke can aid in whole-slide image tissue type segmentation and generalise to new tissues. PanNuke demonstrates one of the first successfully semi-automatically generated datasets. ## citation</description>
<size>2077087715</size>
</item><item>
<title>Deep Learning for Computer Vision - Justin Johnson</title>
<category>Course</category>
<infohash>b0be621d1089525c26fd7325fe77fee2294cc1ab</infohash>
<guid>https://academictorrents.com/details/b0be621d1089525c26fd7325fe77fee2294cc1ab</guid>
<link>https://academictorrents.com/details/b0be621d1089525c26fd7325fe77fee2294cc1ab</link>
<description>Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification and object detection. Recent developments in neural network approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This course is a deep dive into details of neural-network based deep learning methods for computer vision. During this course, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. We will cover learning algorithms, neural network architectures, and practical engineering tricks for training and fine-tuning networks for visual recognition tasks. https://i.imgur.com/ar9RQJx.jpg</description>
<size>3816708490</size>
</item><item>
<title>Material embodiments of electroacoustic music: an experimental workshop study</title>
<category>Paper</category>
<infohash>55de7bc3e45b86836d353bda934706d20976843f</infohash>
<guid>https://academictorrents.com/details/55de7bc3e45b86836d353bda934706d20976843f</guid>
<link>https://academictorrents.com/details/55de7bc3e45b86836d353bda934706d20976843f</link>
<description/>
<size>2162511</size>
</item><item>
<title>Whale Shark ID Dataset</title>
<category>Dataset</category>
<infohash>bb47cd1d6dde2f49b040495382c778c102409080</infohash>
<guid>https://academictorrents.com/details/bb47cd1d6dde2f49b040495382c778c102409080</guid>
<link>https://academictorrents.com/details/bb47cd1d6dde2f49b040495382c778c102409080</link>
<description>Our released whale shark (Rhincodon typus) dataset represents a collaborative effort based on the data collection and population modeling efforts conducted at Ningaloo Marine Park in Western Australia from 1995-2008 (Holmberg et al. 2008, 2009). Photos (7,888) and metadata from 2,441 whale shark encounters were collected from 464 individual contributors, especially from the original research of Brad Norman and from members of the local whale shark tourism industry who sight these animals annually from April-June. Images were annotated with bounding boxes around each visible whale shark, with viewpoints labeled (e.g., left, right, etc.). A total of 543 individual whale sharks were identified by their unique spot patterning, using first computer-assisted spot pattern recognition (Arzoumanian et al. 2005) and then manual review and confirmation. A total of 7,693 named sightings were exported. The dataset is released in the Microsoft COCO format (https://cocodataset.org/) and therefore uses flat image folders with associated YAML metadata files. We have collapsed the entire dataset into a single "train" label and have left "val" and "test" empty; we do this as an invitation to researchers to experiment with their own novel approaches for dealing with the unbalanced and chaotic distribution of the number of sightings per individual. All of the images in the dataset have been resized to have a maximum linear dimension of 3,000 pixels. The metadata for each animal sighting is defined by an axis-aligned bounding box and includes information on the rotation of the box (theta), the viewpoint of the animal, a species (category) ID, a source image ID, an individual string ID name, and other miscellaneous values. The temporal ordering of the images, and an anonymized ID for the original photographer, can be determined from the metadata for each image. For research or press contact, please direct all correspondence to Wild Me at info@wildme.org.
Wild Me (https://www.wildme.org) is a registered 501(c)(3) not-for-profit based in Portland, Oregon, USA and brings state-of-the-art computer vision tools to ecology researchers working around the globe on wildlife conservation. Direct download mirror: https://wildbookiarepository.azureedge.net/datasets/whaleshark.coco.tar.gz</description>
<size>6466072650</size>
</item><item>
<title>Great Zebra and Giraffe Count ID Dataset</title>
<category>Dataset</category>
<infohash>69160c6bf11275321017f18124dbaff2d381b21c</infohash>
<guid>https://academictorrents.com/details/69160c6bf11275321017f18124dbaff2d381b21c</guid>
<link>https://academictorrents.com/details/69160c6bf11275321017f18124dbaff2d381b21c</link>
<description>Our dataset for plains zebra (Equus quagga) is taken from a two-day census of the Nairobi National Park, located just south of the capital’s airport in Nairobi, Kenya. The “Great Zebra and Giraffe Count” (GZGC) photographic census was organized on February 28th and March 1st, 2015 and had the participation of 27 different teams of citizen scientists and 55 total photographers, and collected 9,406 images of plains zebra and Masai giraffe (Giraffa tippelskirchi) (Parham et al. 2017). Only images containing either zebras or giraffes were included in the exported dataset, a total of 4,948 images, with the biographical information of the original contributors removed. All images are labeled with bounding boxes around the individual animals for which there is ID metadata, meaning some images have missing boxes and are not intended to be used for object detection training or testing. Viewpoints for all animal annotations were also added. All ID assignments were completed using the HotSpotter algorithm (Crall et al. 2013) by visually matching the stripes and spots as seen on the body of the animal. A total of 2,056 combined names are released for 6,286 individual zebra and 639 giraffe sightings. This dataset presents a challenging comparison to the whale shark dataset, since it contains a significantly higher number of animals that are only seen once during the survey. The dataset is released in the Microsoft COCO format (https://cocodataset.org/) and therefore uses flat image folders with associated YAML metadata files. We have collapsed the entire dataset into a single "train" label and have left "val" and "test" empty; we do this as an invitation to researchers to experiment with their own novel approaches for dealing with the unbalanced and chaotic distribution of the number of sightings per individual. All of the images in the dataset have been resized to have a maximum linear dimension of 3,000 pixels.
The metadata for each animal sighting is defined by an axis-aligned bounding box and includes information on the rotation of the box (theta), the viewpoint of the animal, a species (category) ID, a source image ID, an individual string ID name, and other miscellaneous values. The temporal ordering of the images, and an anonymized ID for the original photographer, can be determined from the metadata for each image. For research or press contact, please direct all correspondence to Wild Me at info@wildme.org. Wild Me (https://www.wildme.org) is a registered 501(c)(3) not-for-profit based in Portland, Oregon, USA and brings state-of-the-art computer vision tools to ecology researchers working around the globe on wildlife conservation. Direct download mirror: https://wildbookiarepository.azureedge.net/datasets/gzgc.coco.tar.gz</description>
<size>10433199738</size>
</item><item>
<title>2614 Images from the Hubble Space Telescope</title>
<category>Dataset</category>
<infohash>ac13536aeac7799b1a70ae76d34b551c6cc79f2d</infohash>
<guid>https://academictorrents.com/details/ac13536aeac7799b1a70ae76d34b551c6cc79f2d</guid>
<link>https://academictorrents.com/details/ac13536aeac7799b1a70ae76d34b551c6cc79f2d</link>
<description>After the first successful torrent with 100 images from the Hubble Space Telescope, I decided to gather many more images, so this time this zip file contains 2614 images from the Hubble Space Telescope. The majority are in .tif format, some in .jpg. The folder after unpacking weighs 77 GB. All images are Original Size and Type: Observation. If you are scrolling through this database and wondering what you can see in a particular image, you can use the image name to look it up in a search engine. For example, if you are interested in the image ann0912a.tif, you can put "ann0912a" into a search engine and open the top results (they should be from https://www.spacetelescope.org/), where you can read the description of the image. All images are under a CC 4.0 license; you can read more about that here: https://www.spacetelescope.org/copyright/</description>
<size>77208213702</size>
</item><item>
<title>Medical Imaging with Deep Learning Tutorial 2020 - Joseph Paul Cohen</title>
<category>Course</category>
<infohash>e0974c84449826e34d8cc96c943cba2af18ab514</infohash>
<guid>https://academictorrents.com/details/e0974c84449826e34d8cc96c943cba2af18ab514</guid>
<link>https://academictorrents.com/details/e0974c84449826e34d8cc96c943cba2af18ab514</link>
<description>This tutorial will be styled as a graduate lecture about medical imaging with deep learning. It will cover the background of popular medical image domains (chest X-ray and histology) as well as methods to tackle multi-modality/view, segmentation, and counting tasks. These methods will be covered in terms of architecture and objective function design. It also includes a discussion of incorrect feature attribution and approaches to mitigating the issue. Prerequisites: basic knowledge of computer vision (CNNs) and machine learning (regression, gradient descent). Presented by: Joseph Paul Cohen, PhD, Postdoctoral Fellow, Mila, University of Montreal</description>
<size>76859393</size>
</item><item>
<title>Darknet Market Archives 2013-2015 (dnmarchives) </title>
<category>Dataset</category>
<infohash>1698989f23b60f91187d42b031f0ad857793888a</infohash>
<guid>https://academictorrents.com/details/1698989f23b60f91187d42b031f0ad857793888a</guid>
<link>https://academictorrents.com/details/1698989f23b60f91187d42b031f0ad857793888a</link>
<description>Dark Net Markets (DNM) are online markets typically hosted as Tor hidden services providing escrow services between buyers &amp; sellers transacting in Bitcoin or other cryptocoins, usually for drugs or other illegal/regulated goods; the most famous DNM was Silk Road 1, which pioneered the business model in 2011. From 2013–2015, I scraped/mirrored on a weekly or daily basis all existing English-language DNMs as part of my research into their usage, lifetimes/​characteristics, &amp; legal riskiness; these scrapes covered vendor pages, feedback, images, etc. In addition, I made or obtained copies of as many other datasets &amp; documents related to the DNMs as I could. This uniquely comprehensive collection is now publicly released as a 50GB (~1.6TB uncompressed) collection covering 89 DNMs &amp; 37+ related forums, representing &lt;4,438 mirrors, and is available for any research. This page documents the download, contents, interpretation, and technical methods behind the scrapes. There are ~89 markets, &gt;37 forums and ~5 other sites, representing &lt;4,438 mirrors of &gt;43,596,420 files in ~49.4GB of 163 compressed files, unpacking to &gt;1548GB; the largest single archive decompresses to &lt;250GB. (It can be burned to 3 25GB BDs or 2 50GB BDs; if the former, it may be worth generating additional FEC.) These archives are xz-compressed tarballs (optimized with the sort-key trick); typically each subfolder is a single date-stamped (YYYY-MM-DD) crawl using wget, with the default directory/file layout. The majority of the content is HTML, CSS, and images (typically photos of item listings); images are space-intensive &amp; omitted from many crawls, but I feel that images are useful to allow browsing the markets as they were and may be highly valuable in their own right as research material, so I tried to collect images where applicable. (Child porn is not a concern as all DNMs &amp; DNM forums ban that content.) 
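Since each market archive is an xz-compressed tarball of date-stamped crawl folders, a single page can be pulled out without unpacking the whole multi-gigabyte archive. A minimal sketch using a throwaway archive (the real archive and path names differ):

```shell
# Build a tiny throwaway archive mimicking the date-stamped crawl layout.
# (Archive and path names here are invented for illustration only.)
mkdir -p dnm-demo/2014-05-01
printf 'example page\n' > dnm-demo/2014-05-01/index.html
tar -cJf dnm-demo.tar.xz dnm-demo

# Inspect the layout, then extract just one member instead of everything;
# -O (--to-stdout) prints the file without writing it to disk.
tar -tJf dnm-demo.tar.xz
tar -xJOf dnm-demo.tar.xz dnm-demo/2014-05-01/index.html
```

The same `tar -xJf archive.tar.xz path/inside/archive` pattern works against the real tarballs once you have listed their contents with `tar -tJf`.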
Archives sourced from other people follow their own particular conventions. Mac &amp; Windows users may be able to uncompress using their built-in OS archiver, 7zip, Stuffit, or WinRAR; the PAR2 error-checking can be done using par2, QuickPar, Par Buddy, MultiPar or others depending on one’s OS. If you don’t want to uncompress all of a particular archive, as they can be large, you can try extracting specific files using archiver-specific options; for example, a SR2F command targeting a particular old forum thread:</description>
<size>51946455040</size>
</item><item>
<title>DeepPPI: Boosting Prediction of Protein–Protein Interactions with Deep Neural Networks</title>
<category>Paper</category>
<infohash>833e455ff95bf3c2e6d7f2af52529296c3f10284</infohash>
<guid>https://academictorrents.com/details/833e455ff95bf3c2e6d7f2af52529296c3f10284</guid>
<link>https://academictorrents.com/details/833e455ff95bf3c2e6d7f2af52529296c3f10284</link>
<description>The complex language of eukaryotic gene expression remains incompletely understood. Despite the importance suggested by the many protein variants statistically associated with human disease, nearly all such variants have unknown mechanisms, for example in protein-protein interactions (PPIs). In this study, we address this challenge using a recent machine learning advance: deep neural networks (DNNs). We aim to improve the performance of PPI prediction and propose a method called DeepPPI (Deep neural networks for Protein-Protein Interactions prediction), which employs deep neural networks to effectively learn representations of proteins from common protein descriptors. The experimental results indicate that DeepPPI achieves superior performance on the test data set, with an Accuracy of 92.50%, Precision of 94.38%, Recall of 90.56%, Specificity of 94.49%, Matthews Correlation Coefficient of 85.08%, and Area Under the Curve of 97.43%. Extensive experiments show that DeepPPI can learn useful features of protein pairs by layer-wise abstraction, and thus achieves better prediction performance than existing methods. The source code of our approach is available at http://ailab.ahu.edu.cn:8087/DeepPPI/index.html .</description>
<size>309208</size>
</item><item>
<title>DRIVE: Digital Retinal Images for Vessel Extraction</title>
<category>Dataset</category>
<infohash>062dc18f55b086c76c718ac88f98972789b3c04c</infohash>
<guid>https://academictorrents.com/details/062dc18f55b086c76c718ac88f98972789b3c04c</guid>
<link>https://academictorrents.com/details/062dc18f55b086c76c718ac88f98972789b3c04c</link>
<description>The DRIVE database has been established to enable comparative studies on segmentation of blood vessels in retinal images. Retinal vessel segmentation and delineation of morphological attributes of retinal blood vessels, such as length, width, tortuosity, branching patterns, and angles, are utilized for the diagnosis, screening, treatment, and evaluation of various cardiovascular and ophthalmologic diseases such as diabetes, hypertension, arteriosclerosis, and choroidal neovascularization. Automatic detection and analysis of the vasculature can assist in the implementation of screening programs for diabetic retinopathy, can aid research on the relationship between vessel tortuosity and hypertensive retinopathy and on vessel diameter measurement in relation to the diagnosis of hypertension, and can support computer-assisted laser surgery. Automatic generation of retinal maps and extraction of branch points have been used for temporal or multimodal image registration and retinal image mosaic synthesis. Moreover, the retinal vascular tree is found to be unique for each individual and can be used for biometric identification. ## Data The photographs for the DRIVE database were obtained from a diabetic retinopathy screening program in The Netherlands. The screening population consisted of 400 diabetic subjects between 25 and 90 years of age. Forty photographs were randomly selected, of which 33 do not show any sign of diabetic retinopathy and 7 show signs of mild early diabetic retinopathy. Here is a brief description of the abnormalities in these 7 cases: 25_training: pigment epithelium changes, probably butterfly maculopathy with pigmented scar in fovea, or choroidiopathy, no diabetic retinopathy or other vascular abnormalities.
26_training: background diabetic retinopathy, pigmentary epithelial atrophy, atrophy around optic disk 32_training: background diabetic retinopathy 03_test: background diabetic retinopathy 08_test: pigment epithelium changes, pigmented scar in fovea, or choroidiopathy, no diabetic retinopathy or other vascular abnormalities 14_test: background diabetic retinopathy 17_test: background diabetic retinopathy Each image has been JPEG compressed. The images were acquired using a Canon CR5 non-mydriatic 3CCD camera with a 45 degree field of view (FOV). Each image was captured using 8 bits per color plane at 768 by 584 pixels. The FOV of each image is circular with a diameter of approximately 540 pixels. For this database, the images have been cropped around the FOV. For each image, a mask image is provided that delineates the FOV. The set of 40 images has been divided into a training and a test set, both containing 20 images. For the training images, a single manual segmentation of the vasculature is available. For the test cases, two manual segmentations are available; one is used as the gold standard, while the other can be used to compare computer-generated segmentations with those of an independent human observer. Furthermore, a mask image is available for every retinal image, indicating the region of interest. All human observers who manually segmented the vasculature were instructed and trained by an experienced ophthalmologist. They were asked to mark all pixels of which they were at least 70% certain that they were vessel. https://i.imgur.com/AkjZ5pz.png</description>
<size>29343870</size>
</item><item>
<title>Object-CXR - Automatic detection of foreign objects on chest X-rays</title>
<category>Dataset</category>
<infohash>fdc91f11d7010f7259a05403fc9d00079a09f5d5</infohash>
<guid>https://academictorrents.com/details/fdc91f11d7010f7259a05403fc9d00079a09f5d5</guid>
<link>https://academictorrents.com/details/fdc91f11d7010f7259a05403fc9d00079a09f5d5</link>
<description>## Data 5000 frontal chest X-ray images with foreign objects present and 5000 frontal chest X-ray images without foreign objects were filmed and collected from about 300 township hospitals in China. 12 medically-trained radiologists with 1 to 3 years of experience annotated all the images. Each annotator manually annotated the potential foreign objects presented within the lung field on a given chest X-ray. Foreign objects were annotated with bounding boxes, bounding ellipses, or masks depending on the shape of the objects. Support devices were excluded from annotation. A typical frontal chest X-ray with foreign objects annotated looks like this: https://i.imgur.com/SFUZy80.jpg ## Annotation Object-level annotations are provided for each image, indicating the rough location of each foreign object using a closed shape. Annotations are provided in csv files; an example is shown below.

```csv
image_path,annotation
/path/#####.jpg,ANNO_TYPE_IDX x1 y1 x2 y2;ANNO_TYPE_IDX x1 y1 x2 y2 ... xn yn;...
/path/#####.jpg,
/path/#####.jpg,ANNO_TYPE_IDX x1 y1 x2 y2 ...
```

Three types of shapes are used, namely rectangle, ellipse, and polygon. We use  0 ,  1 , and  2  as the  ANNO_TYPE_IDX  respectively.
- For rectangle and ellipse annotations, we provide the bounding box (upper left and lower right) coordinates in the format  x1 y1 x2 y2  where  x1  &lt;  x2  and  y1  &lt;  y2 .
- For polygon annotations, we provide a sequence of coordinates in the format  x1 y1 x2 y2 ... xn yn .
&gt; ### Note:
&gt; Our annotations use a Cartesian pixel coordinate system, with the origin (0,0) in the upper left corner. The x coordinate extends from left to right; the y coordinate extends downward.
## Organizers [JF Healthcare](http://www.jfhealthcare.com/) is the primary organizer of this challenge.</description>
<size>13636253487</size>
</item><item>
<title>Sci-Hub SQL Database (2020-05-30)</title>
<category>Dataset</category>
<infohash>4b13244559282f9650a382f70506dc4c516215e2</infohash>
<guid>https://academictorrents.com/details/4b13244559282f9650a382f70506dc4c516215e2</guid>
<link>https://academictorrents.com/details/4b13244559282f9650a382f70506dc4c516215e2</link>
<description>Sci-Hub is a website that provides free access to scientific research articles and books by bypassing publisher paywalls. Sci-Hub does not make an article database publicly available. Instead, Library Genesis indexes files provided by Sci-Hub in a separate database (internally named "scimag"). The Library Genesis "scimag" database indexed 82,513,235 articles as of 2020-07-07. The database consists only of DOIs and article metadata and does not contain the articles themselves. Each row of the database represents an entry for a full-text scientific article, uniquely identified by its DOI. Timestamped 2020-05-30 04:54. Current as of 2020-07-07. File hashes: MD5: 1CC808AD4ACC430A4B3A40892252793A; SHA-1: 5369999C6A534A693D41D32A7D5869505C292155. Related research: Cabanac, G. (2016). Bibliogifts in LibGen? A study of a text-sharing platform driven by biblioleaks and crowdsourcing. Journal of the Association for Information Science and Technology, 67(4), 874–884. https://doi.org/10.1002/asi.23445 Greshake, B. (2017). Looking into Pandora's Box: The Content of Sci-Hub and its Usage. F1000Research, 6:541. https://doi.org/10.12688/f1000research.11366.1 Himmelstein, D. S., Romero, A. R., Levernier, J. G., Munro, T. A., McLaughlin, S. R., Tzovaras, B. G., &amp; Greene, C. S. (2018). Sci-Hub provides access to nearly all scholarly literature. ELife, 7, e32822. https://doi.org/10.7554/eLife.32822</description>
<size>10352938638</size>
</item><item>
<title>SIIM-ACR Pneumothorax Segmentation</title>
<category>Dataset</category>
<infohash>6ef7c6d039e85152c4d0f31d83fa70edc4aba088</infohash>
<guid>https://academictorrents.com/details/6ef7c6d039e85152c4d0f31d83fa70edc4aba088</guid>
<link>https://academictorrents.com/details/6ef7c6d039e85152c4d0f31d83fa70edc4aba088</link>
<description>In this competition, you’ll develop a model to classify (and if present, segment) pneumothorax from a set of chest radiographic images. If successful, you could aid in the early recognition of pneumothoraces and save lives. What am I predicting? We are attempting to a) predict the existence of pneumothorax in our test images and b) indicate the location and extent of the condition using masks. Your model should create binary masks and encode them using RLE. https://i.imgur.com/xJYwEv4.png</description>
<size>2072340626</size>
</item><item>
<title>Tiny Images Dataset</title>
<category>Dataset</category>
<infohash>325fc900c2c7bb7a0cfcfd45851a65c2f5b5391d</infohash>
<guid>https://academictorrents.com/details/325fc900c2c7bb7a0cfcfd45851a65c2f5b5391d</guid>
<link>https://academictorrents.com/details/325fc900c2c7bb7a0cfcfd45851a65c2f5b5391d</link>
<description>With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, we explore this world with the aid of a large dataset of 79,302,017 images collected from the Internet. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the WordNet lexical database. Hence the image database gives a comprehensive coverage of all object categories and scenes. The semantic information from WordNet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels, minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors.</description>
<size>426335124834</size>
</item><item>
<title>distriploy (revision: v0.15)</title>
<category>Dataset</category>
<infohash>54790fb55f295bce3415722efa7eca9724ee4322</infohash>
<guid>https://academictorrents.com/details/54790fb55f295bce3415722efa7eca9724ee4322</guid>
<link>https://academictorrents.com/details/54790fb55f295bce3415722efa7eca9724ee4322</link>
<description>Distriploy is a small tool handling release deployment</description>
<size>11883</size>
</item><item>
<title>distriploy (revision: v0.15-1-g96e1305)</title>
<category>Dataset</category>
<infohash>ef687bf6a847de963e3d900c4489783ad53db557</infohash>
<guid>https://academictorrents.com/details/ef687bf6a847de963e3d900c4489783ad53db557</guid>
<link>https://academictorrents.com/details/ef687bf6a847de963e3d900c4489783ad53db557</link>
<description>Distriploy is a small tool handling release deployment</description>
<size>11883</size>
</item><item>
<title>distriploy (revision: v0.14-1-g5bda869)</title>
<category>Dataset</category>
<infohash>53d3f1a237581ce2f8b6ccd3aac92ade84f1603c</infohash>
<guid>https://academictorrents.com/details/53d3f1a237581ce2f8b6ccd3aac92ade84f1603c</guid>
<link>https://academictorrents.com/details/53d3f1a237581ce2f8b6ccd3aac92ade84f1603c</link>
<description>Distriploy is a small tool handling release deployment</description>
<size>11717</size>
</item><item>
<title>distriploy (revision: v0.13)</title>
<category>Dataset</category>
<infohash>7ecfdc4ec3225ed388d6be0a74c6c415a5a4ab22</infohash>
<guid>https://academictorrents.com/details/7ecfdc4ec3225ed388d6be0a74c6c415a5a4ab22</guid>
<link>https://academictorrents.com/details/7ecfdc4ec3225ed388d6be0a74c6c415a5a4ab22</link>
<description>Distriploy is a small tool handling release deployment</description>
<size>11555</size>
</item><item>
<title>distriploy (revision: v0.12)</title>
<category>Dataset</category>
<infohash>7a53ddeb2eeb7968bf865781866eed95d11072dd</infohash>
<guid>https://academictorrents.com/details/7a53ddeb2eeb7968bf865781866eed95d11072dd</guid>
<link>https://academictorrents.com/details/7a53ddeb2eeb7968bf865781866eed95d11072dd</link>
<description>Distriploy is a small tool handling release deployment</description>
<size>11523</size>
</item><item>
<title>distriploy (revision: v0.11-8-g6e0d8b1)</title>
<category>Dataset</category>
<infohash>88c0ba02ef1937caf4d6aff43c34df7b0220054e</infohash>
<guid>https://academictorrents.com/details/88c0ba02ef1937caf4d6aff43c34df7b0220054e</guid>
<link>https://academictorrents.com/details/88c0ba02ef1937caf4d6aff43c34df7b0220054e</link>
<description>Distriploy is a small tool handling release deployment</description>
<size>11502</size>
</item><item>
<title>distriploy (revision: v0.11-7-gc2efce7)</title>
<category>Dataset</category>
<infohash>9b4d9680a5d8b9c56ca6473ee1455a84aed7975a</infohash>
<guid>https://academictorrents.com/details/9b4d9680a5d8b9c56ca6473ee1455a84aed7975a</guid>
<link>https://academictorrents.com/details/9b4d9680a5d8b9c56ca6473ee1455a84aed7975a</link>
<description>Distriploy is a small tool handling release deployment</description>
<size>11502</size>
</item><item>
<title>distriploy (revision: v0.11-6-g2b7b3f5)</title>
<category>Dataset</category>
<infohash>20e2b9cb8646e173f55b155251d79274c73f84f8</infohash>
<guid>https://academictorrents.com/details/20e2b9cb8646e173f55b155251d79274c73f84f8</guid>
<link>https://academictorrents.com/details/20e2b9cb8646e173f55b155251d79274c73f84f8</link>
<description>Distriploy is a small tool handling release deployment</description>
<size>11353</size>
</item><item>
<title>distriploy (revision: v0.11-4-g6791743)</title>
<category>Dataset</category>
<infohash>32e85e387a97e875af65122b3a76c7a1c7da2fa9</infohash>
<guid>https://academictorrents.com/details/32e85e387a97e875af65122b3a76c7a1c7da2fa9</guid>
<link>https://academictorrents.com/details/32e85e387a97e875af65122b3a76c7a1c7da2fa9</link>
<description>Distriploy is a small tool handling release deployment</description>
<size>11353</size>
</item><item>
<title>distriploy (revision: v0.11-3-gbe78305)</title>
<category>Dataset</category>
<infohash>7db2c8afbc2e7724c0afd9c62795e2aa575d0c5e</infohash>
<guid>https://academictorrents.com/details/7db2c8afbc2e7724c0afd9c62795e2aa575d0c5e</guid>
<link>https://academictorrents.com/details/7db2c8afbc2e7724c0afd9c62795e2aa575d0c5e</link>
<description>Distriploy is a small tool handling release deployment</description>
<size>11240</size>
</item><item>
<title>Leaf counting dataset</title>
<category>Dataset</category>
<infohash>a147c27ea0a9c155df9d77af832c321210cf5529</infohash>
<guid>https://academictorrents.com/details/a147c27ea0a9c155df9d77af832c321210cf5529</guid>
<link>https://academictorrents.com/details/a147c27ea0a9c155df9d77af832c321210cf5529</link>
<description>## Leaf counting dataset Dataset containing 9372 RGB images of weeds with the number of leaves counted. The images were collected in fields across Denmark using Nokia and Samsung cell phone cameras; Samsung, Nikon, Canon, and Sony consumer cameras; and a Point Grey industrial camera. https://i.imgur.com/h7JFf86.jpg ## Citation If you use this dataset in your research or elsewhere, please cite/reference the following paper: Weed Growth Stage Estimator Using Deep Convolutional Neural Networks</description>
<size>925394199</size>
</item><item>
<title>Oxford Town Centre Dataset</title>
<category>Dataset</category>
<infohash>35e83806d9362a57be736f370c821960eb2f2a01</infohash>
<guid>https://academictorrents.com/details/35e83806d9362a57be736f370c821960eb2f2a01</guid>
<link>https://academictorrents.com/details/35e83806d9362a57be736f370c821960eb2f2a01</link>
<description>For a coarse gaze estimation system to be useful, many people must be tracked simultaneously in real-time and in the presence of frequent occlusions and other distractions such as animals or vehicles. Two tracking systems were developed, both of which were based on two important image measurements. The first measurement was the output of a head detector trained using Dalal &amp; Triggs' HOG detection algorithm, which has become standard for the purposes of pedestrian detection. Although HOG detection is generally slow, it has become suitable for real-time use due to efficient GPU implementations. The second type of measurement comes from sparse KLT tracking. Although it has been around for a long time, KLT corner tracking still provides an impressive amount of information for very little processing time. The first tracking system to be developed was based around a Kalman filter; however, this proved to be susceptible to data association errors when the HOG detector failed. The second, more recent approach uses Markov-Chain Monte-Carlo Data Association (MCMCDA) with an accurate error model. MCMCDA not only allows ambiguities to be resolved more efficiently, but also allows the tracking system to cope with temporary occlusions. The ‘Town Centre’ dataset was used to test tracking performance in both the CVPR 2011 and the BMVC 2009 papers. TownCentreXVID.avi (342MB) - The video file</description>
<size>89733562</size>
</item><item>
<title>The WILDTRACK Seven-Camera HD Dataset</title>
<category>Dataset</category>
<infohash>5931991ad96a83cca85c0604061e766abefdf94b</infohash>
<guid>https://academictorrents.com/details/5931991ad96a83cca85c0604061e766abefdf94b</guid>
<link>https://academictorrents.com/details/5931991ad96a83cca85c0604061e766abefdf94b</link>
<description>The challenging and realistic setup of the ‘WILDTRACK’ dataset brings multi-camera detection and tracking methods into the wild. It meets the need of deep learning methods for a large-scale multi-camera dataset of walking pedestrians, where the cameras’ fields of view overlap to a large extent. Being acquired with current high-tech hardware, it provides HD resolution data. Further, its high-precision joint calibration and synchronization shall allow for the development of new algorithms that go beyond what is possible with currently available datasets. The data acquisition took place in front of the main building of ETH Zurich, Switzerland, during nice weather conditions. The sequences are of resolution 1920×1080 pixels, shot at 60 frames per second. https://i.imgur.com/Hzamclh.jpg ## Description of available files Synchronized frames extracted with a frame rate of 10 fps, 1920×1080 resolution, post-processed to remove the distortion; Calibration files which use the pinhole camera model, compatible with the projection functions provided in the OpenCV library (both the extrinsic and the intrinsic calibrations are available); The ground-truth annotations in a ‘json’ file format (please see the separate section below); For ease of use with methods focusing on classification, a file we refer to as the ‘positions’ file, also in ‘json’ format (for details please refer to the section below). The zip file can be unpacked using the Mac unzip command. Please check for an update of this site, which shall extend the download list with: full videos; corresponding points annotations which may be used for camera calibration algorithms; and a second part of this dataset which, albeit not annotated, can be used for unsupervised methods. ## Positions file The ‘positions’ file allows for omitting the work with calibration files and focusing, for instance, on classification, while making use of the fact that the cameras are static.
It consists of information about where exactly a given set of particular volumes of space projects to in each of the views. The height of each volume corresponds to that of an average person. We discretize the ground surface as a regular grid. The 3D space occupied by a person standing at a particular position is modelled by a cylinder positioned centrally on the grid point. Each cylinder projects into each of the separate 2D views as a rectangle whose position in the view is given in pixel coordinates. Using a 480×1440 grid – totalling 691,200 positions – and the provided camera calibration files, we generate this file, which is available for download. Each position is assigned an ID using 0-based enumeration ([0, 691199]). The views’ ordering numbers in this file also follow such enumeration, i.e. they range between 0 and 6 inclusive. Positions which are not visible in a given view are assigned coordinates of -1. ## Annotations</description>
<size>62125056694</size>
</item><item>
<title>MPII Human Pose Dataset</title>
<category>Dataset</category>
<infohash>6be335f0d038fd4ed4422dd318705e0843059718</infohash>
<guid>https://academictorrents.com/details/6be335f0d038fd4ed4422dd318705e0843059718</guid>
<link>https://academictorrents.com/details/6be335f0d038fd4ed4422dd318705e0843059718</link>
<description>The MPII Human Pose dataset is a state-of-the-art benchmark for the evaluation of articulated human pose estimation. The dataset includes around 25K images containing over 40K people with annotated body joints. The images were systematically collected using an established taxonomy of everyday human activities. Overall the dataset covers 410 human activities, and each image is provided with an activity label. Each image was extracted from a YouTube video and is provided with preceding and following un-annotated frames. In addition, for the test set we obtained richer annotations including body part occlusions and 3D torso and head orientations. Following the best practices for performance evaluation benchmarks in the literature, we withhold the test annotations to prevent overfitting and tuning on the test set. We are working on an automatic evaluation server and performance analysis tools based on the rich test set annotations.</description>
<size>12101283689</size>
</item><item>
<title>[Coursera] What A Plant Knows (Daniel Chamovitz, Tel Aviv University)</title>
<category>Course</category>
<infohash>81ff5fc1df7c1fb9300e9712368dfc479427004d</infohash>
<guid>https://academictorrents.com/details/81ff5fc1df7c1fb9300e9712368dfc479427004d</guid>
<link>https://academictorrents.com/details/81ff5fc1df7c1fb9300e9712368dfc479427004d</link>
<description>For centuries we have collectively marveled at plant diversity and form—from Charles Darwin’s early fascination with stems and flowers to Seymour Krelborn’s distorted doting in Little Shop of Horrors. This course intends to present an intriguing and scientifically valid look at how plants themselves experience the world—from the colors they see to the sensations they feel. Highlighting the latest research in genetics and more, we will delve into the inner lives of plants and draw parallels with the human senses to reveal that we have much more in common with sunflowers and oak trees than we may realize. We’ll learn how plants know up from down, how they know when a neighbor has been infested by a group of hungry beetles, and whether they appreciate the music you’ve been playing for them or if they’re just deaf to the sounds around them. We’ll explore definitions of memory and consciousness as they relate to plants in asking whether we can say that plants might even be aware of their surroundings. This highly interdisciplinary course meshes historical studies with cutting edge modern research and will be relevant to all humans who seek their place in nature. This class has three main goals: 1. To introduce you to basic plant biology by exploring plant senses (sight, smell, hearing, touch, taste, balance). 2. To introduce you to biological research and the scientific method. 3. To get the student to question life in general and what defines us as humans. Once you've taken this course, if you are interested in a more in-depth study of plants, check out my follow-up course, Fundamentals of Plant Biology (https://www.coursera.org/learn/plant-biology/home/welcome). In order to receive academic credit for this course you must successfully pass the academic exam on campus.
For information on how to register for the academic exam – https://tauonline.tau.ac.il/registration Additionally, you can apply to certain degrees using the grades you received on the courses. Read more on this here – https://go.tau.ac.il/b.a/mooc-acceptance Teachers interested in teaching this course in their classrooms are invited to explore our Academic High School program here – https://tauonline.tau.ac.il/online-highschool https://i.imgur.com/yvcoRwi.png</description>
<size>538937350</size>
</item><item>
<title>[Coursera] Natural Language Processing (Dan Jurafsky and Chris Manning)</title>
<category>Course</category>
<infohash>9ad3c282ff6c4137ed8b073d884ea3d72c2e4cd1</infohash>
<guid>https://academictorrents.com/details/9ad3c282ff6c4137ed8b073d884ea3d72c2e4cd1</guid>
<link>https://academictorrents.com/details/9ad3c282ff6c4137ed8b073d884ea3d72c2e4cd1</link>
<description/>
<size>1176932543</size>
</item><item>
<title>01TXY 2019-2020 Web Applications I</title>
<category>Course</category>
<infohash>ffdcaa1b2bca554612545c0394318d60e6b3796b</infohash>
<guid>https://academictorrents.com/details/ffdcaa1b2bca554612545c0394318d60e6b3796b</guid>
<link>https://academictorrents.com/details/ffdcaa1b2bca554612545c0394318d60e6b3796b</link>
<description>Video lectures of the course "Web Applications I", held at Politecnico di Torino (Italy) in the academic year 2019/2020.</description>
<size>10017671556</size>
</item><item>
<title>03FYZ 2019-2020 Tecniche di Programmazione (ITA)</title>
<category>Course</category>
<infohash>be289c8dceaa7ef16eb2969431c7e8330285c7f4</infohash>
<guid>https://academictorrents.com/details/be289c8dceaa7ef16eb2969431c7e8330285c7f4</guid>
<link>https://academictorrents.com/details/be289c8dceaa7ef16eb2969431c7e8330285c7f4</link>
<description>Video lectures for the course Tecniche di Programmazione (Programming Techniques), held at Politecnico di Torino in the academic year 2019/2020. Course instructors: Fulvio Corno, Alberto Monge Roffarello, Tatiana Tommasi. Course information: - official course page: http://bit.ly/tecn-progr - teaching material: https://github.com/TdP-2020/materiale - exercises and labs: https://github.com/TdP-2020 - past exam papers: https://github.com/TdP-esami These video lectures are also available as a playlist on YouTube.</description>
<size>14250286689</size>
</item><item>
<title>CommunismTrentPattersonnew</title>
<category>Paper</category>
<infohash>4d96ce569e8825ef7dff271a1c189886b5540f35</infohash>
<guid>https://academictorrents.com/details/4d96ce569e8825ef7dff271a1c189886b5540f35</guid>
<link>https://academictorrents.com/details/4d96ce569e8825ef7dff271a1c189886b5540f35</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/CommunismTrentPattersonnew Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/CommunismTrentPattersonnew/CommunismTrentPattersonnew_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a ‘pad file’ directory. This directory and the files within it may be erased once retrieval completes. Note: the file CommunismTrentPattersonnew_meta.xml contains metadata about this torrent’s contents.</description>
<size>1087416</size>
</item><item>
<title>DepressedTrentPatterson</title>
<category>Paper</category>
<infohash>587c7da13d4a35db6f5ab4e3378b8a7f9c3f3f74</infohash>
<guid>https://academictorrents.com/details/587c7da13d4a35db6f5ab4e3378b8a7f9c3f3f74</guid>
<link>https://academictorrents.com/details/587c7da13d4a35db6f5ab4e3378b8a7f9c3f3f74</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/DepressedTrentPatterson Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/DepressedTrentPatterson/DepressedTrentPatterson_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a ‘pad file’ directory. This directory and the files within it may be erased once retrieval completes. Note: the file DepressedTrentPatterson_meta.xml contains metadata about this torrent’s contents.</description>
<size>112119</size>
</item><item>
<title>AgrumentiveEssayStowers</title>
<category>Paper</category>
<infohash>fb729fde9fec11777e629855ea3d689b0584d73a</infohash>
<guid>https://academictorrents.com/details/fb729fde9fec11777e629855ea3d689b0584d73a</guid>
<link>https://academictorrents.com/details/fb729fde9fec11777e629855ea3d689b0584d73a</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/AgrumentiveEssayStowers Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/AgrumentiveEssayStowers/AgrumentiveEssayStowers_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a ‘pad file’ directory. This directory and the files within it may be erased once retrieval completes. Note: the file AgrumentiveEssayStowers_meta.xml contains metadata about this torrent’s contents.</description>
<size>1940688</size>
</item><item>
<title>ProcessAnalysisEssayGrowingPotatoes</title>
<category>Paper</category>
<infohash>73a9a0349da6984215ca7add9871b546a304e1c1</infohash>
<guid>https://academictorrents.com/details/73a9a0349da6984215ca7add9871b546a304e1c1</guid>
<link>https://academictorrents.com/details/73a9a0349da6984215ca7add9871b546a304e1c1</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/ProcessAnalysisEssayGrowingPotatoes Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/ProcessAnalysisEssayGrowingPotatoes/ProcessAnalysisEssayGrowingPotatoes_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a ‘pad file’ directory. This directory and the files within it may be erased once retrieval completes. Note: the file ProcessAnalysisEssayGrowingPotatoes_meta.xml contains metadata about this torrent’s contents.</description>
<size>1866746</size>
</item><item>
<title>HighSchoolVsCollageNew</title>
<category>Paper</category>
<infohash>8c62db37e5b8dd7ffd84a8b650825b6dace8eb07</infohash>
<guid>https://academictorrents.com/details/8c62db37e5b8dd7ffd84a8b650825b6dace8eb07</guid>
<link>https://academictorrents.com/details/8c62db37e5b8dd7ffd84a8b650825b6dace8eb07</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/HighSchoolVsCollageNew. Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/HighSchoolVsCollageNew/HighSchoolVsCollageNew_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file HighSchoolVsCollageNew_meta.xml contains metadata about this torrent's contents.</description>
<size>8269785</size>
</item><item>
<title>TheEffectsOfSmoking_201810</title>
<category>Paper</category>
<infohash>d9c88c3115d98494766edf866a776154a721bd43</infohash>
<guid>https://academictorrents.com/details/d9c88c3115d98494766edf866a776154a721bd43</guid>
<link>https://academictorrents.com/details/d9c88c3115d98494766edf866a776154a721bd43</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/TheEffectsOfSmoking_201810. Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/TheEffectsOfSmoking_201810/TheEffectsOfSmoking_201810_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file TheEffectsOfSmoking_201810_meta.xml contains metadata about this torrent's contents.</description>
<size>48406440</size>
</item><item>
<title>EPIC-KITCHENS-100</title>
<category>Dataset</category>
<infohash>cc2d9afabcbbe33686d2ecd9844b534e3a899f4b</infohash>
<guid>https://academictorrents.com/details/cc2d9afabcbbe33686d2ecd9844b534e3a899f4b</guid>
<link>https://academictorrents.com/details/cc2d9afabcbbe33686d2ecd9844b534e3a899f4b</link>
<description>EPIC-KITCHENS-100: extended footage for the EPIC-KITCHENS dataset, bringing it to 100 hours. DOI: 10.5523/bris.2g1n6qdydwa9u22shpxqzp0t8m (2020-09-10). N.b. please also see the ERRATUM published at https://github.com/epic-kitchens/epic-kitchens-100-annotations/blob/master/README.md#erratum</description>
<size>795905917734</size>
</item><item>
<title>criticalthinking103rootingandjailbreaking</title>
<category>Paper</category>
<infohash>346144dd7df6212c067959986f50650d84eb61c2</infohash>
<guid>https://academictorrents.com/details/346144dd7df6212c067959986f50650d84eb61c2</guid>
<link>https://academictorrents.com/details/346144dd7df6212c067959986f50650d84eb61c2</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/criticalthinking103rootingandjailbreaking. Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/criticalthinking103rootingandjailbreaking/criticalthinking103rootingandjailbreaking_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file criticalthinking103rootingandjailbreaking_meta.xml contains metadata about this torrent's contents.</description>
<size>116083130</size>
</item><item>
<title>criticalthinking65lakepointconsultingservices</title>
<category>Paper</category>
<infohash>4e45313be14bfdb03911823495fcbe863e84498a</infohash>
<guid>https://academictorrents.com/details/4e45313be14bfdb03911823495fcbe863e84498a</guid>
<link>https://academictorrents.com/details/4e45313be14bfdb03911823495fcbe863e84498a</link>
<description>This content is hosted at the Internet Archive at https://archive.org/details/criticalthinking65lakepointconsultingservices. Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/criticalthinking65lakepointconsultingservices/criticalthinking65lakepointconsultingservices_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file criticalthinking65lakepointconsultingservices_meta.xml contains metadata about this torrent's contents.</description>
<size>223478295</size>
</item><item>
<title>Diolkos photogrammetry dataset</title>
<category>Dataset</category>
<infohash>24faddc845c8ea502eabf2ffd097574671107f12</infohash>
<guid>https://academictorrents.com/details/24faddc845c8ea502eabf2ffd097574671107f12</guid>
<link>https://academictorrents.com/details/24faddc845c8ea502eabf2ffd097574671107f12</link>
<description/>
<size>4276195918</size>
</item><item>
<title>ratarmount indexes for PMC OpenAccess subset</title>
<category>Dataset</category>
<infohash>e95526a0bc4f39a5bbf423b24708d65fa4542d20</infohash>
<guid>https://academictorrents.com/details/e95526a0bc4f39a5bbf423b24708d65fa4542d20</guid>
<link>https://academictorrents.com/details/e95526a0bc4f39a5bbf423b24708d65fa4542d20</link>
<description>## the problem The PMC Open Access bulk article set (commercial and non-commercial) is a hefty collection of files that weighs in at 79G compressed and 388G uncompressed. Decompressing the archives can itself take hours. A BitTorrent mirror exists at: https://academictorrents.com/details/06d6badd7d1b0cfee00081c28fddd5e15e106165 ## the solution ratarmount (https://github.com/mxmlnkn/ratarmount), a Python application, uses FUSE (through fusepy) to mount a compressed archive as a filesystem, letting us randomly access files inside the archive without decompressing it first. To achieve good performance, it creates an index (an SQLite database per archive). This set of indexes still weighs in at 1.4G uncompressed (345M compressed). ## usage * decompress all indexes into the same directory where you've downloaded oa_bulk * install ratarmount * use ratarmount to mount the oa_bulk archives; a sample script mount.sh is provided as an example ## distribution we also use BitTorrent to distribute the set of indexes.</description>
<size>361500928</size>
</item><item>
<title>PMC Open Access Subset</title>
<category>Dataset</category>
<infohash>06d6badd7d1b0cfee00081c28fddd5e15e106165</infohash>
<guid>https://academictorrents.com/details/06d6badd7d1b0cfee00081c28fddd5e15e106165</guid>
<link>https://academictorrents.com/details/06d6badd7d1b0cfee00081c28fddd5e15e106165</link>
<description>https://i.imgur.com/GBSDr8v.png mirror of ftp.ncbi.nlm.nih.gov:/pub/pmc/oa_bulk PubMed Central® (PMC) is a free full-text archive of biomedical and life sciences journal literature at the U.S. National Institutes of Health's National Library of Medicine (NIH/NLM). https://www.ncbi.nlm.nih.gov/pmc/ The PMC Open Access Subset is a part of the total collection of articles in PMC. The articles in the OA Subset are made available under a Creative Commons or similar license that generally allows more liberal redistribution and reuse than a traditional copyrighted work. https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/</description>
<size>84144856912</size>
</item><item>
<title>MosMedData: Chest CT Scans with COVID-19 Related Findings  COVID19_1110 1.0</title>
<category>Dataset</category>
<infohash>f2175c4676e041ea65568bb70c2bcd15c7325fd2</infohash>
<guid>https://academictorrents.com/details/f2175c4676e041ea65568bb70c2bcd15c7325fd2</guid>
<link>https://academictorrents.com/details/f2175c4676e041ea65568bb70c2bcd15c7325fd2</link>
<description>This dataset contains anonymised human lung computed tomography (CT) scans with COVID-19 related findings, as well as scans without such findings. A small subset of studies has been annotated with binary pixel masks depicting regions of interest (ground-glass opacifications and consolidations). CT scans were obtained between March 1, 2020 and April 25, 2020, and were provided by medical hospitals in Moscow, Russia. https://i.imgur.com/hLFBdBH.png

## Data Structure

    .
    |-- dataset_registry.xlsx
    |-- LICENSE
    |-- README_EN.md
    |-- README_RU.md
    |-- README_EN.pdf
    |-- README_RU.pdf
    |-- masks
    |   |-- study_BBBB_mask.nii.gz
    |   |-- ...
    |   `-- study_BBBB_mask.nii.gz
    `-- studies
        |-- CT-0
        |   |-- study_BBBB.nii.gz
        |   |-- ...
        |   `-- study_BBBB.nii.gz
        |-- CT-1
        |   |-- study_BBBB.nii.gz
        |   |-- ...
        |   `-- study_BBBB.nii.gz
        |-- CT-2
        |   |-- study_BBBB.nii.gz
        |   |-- ...
        |   `-- study_BBBB.nii.gz
        |-- CT-3
        |   |-- study_BBBB.nii.gz
        |   |-- ...
        |   `-- study_BBBB.nii.gz
        `-- CT-4
            |-- study_BBBB.nii.gz
            |-- ...
            `-- study_BBBB.nii.gz

* README_EN.md and README_RU.md contain general information about the dataset; they have been saved in Markdown format in English and Russian, respectively. README_EN.pdf and README_RU.pdf contain the same information but have been saved in PDF format for convenience.
* The LICENSE file contains the full text of the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND 3.0) License.
* dataset_registry.xlsx is a spreadsheet with the full list of studies included in the dataset, as well as the relative paths to each study file and to its binary mask, if present.
* The studies directory contains directories named CT-0, CT-1, CT-2, CT-3, and CT-4 (for more information see below). Each directory contains studies in NIfTI format, saved as Gzip archives. Each study has a unique name like study_BBBB.nii.gz, where BBBB is the sequential number of the study in the whole dataset.
* The masks directory contains binary pixel masks in NIfTI format, saved as Gzip archives. Each mask has a unique name like study_BBBB_mask.nii.gz, where BBBB is the number of the corresponding study.

## Data Overview

| Property | Value |
| :--- | :--- |
| Number of studies, pcs. | 1110 |
| Number of patients, ppl. | 1110 |
| Distribution by sex, % (M/ F/ O) | 42/ 56/ 2 |
| Distribution by age, years (min./ median/ max.) | 18/ 47/ 97 |
| Number of binary pixel masks (Class A Annotation), pcs. | 50 |
| Number of studies in each category (Class C Annotation), pcs. (CT-0/ CT-1/ CT-2/ CT-3/ CT-4) | 254/ 684/ 125/ 45/ 2 |

### Data Preprocessing

* Each study corresponds to a unique patient.
* Each study is represented by one series of images reconstructed into a soft-tissue mediastinal window (SeriesDescription LIKE '%BODY%').
* During the DICOM-to-NIfTI conversion, only every 10th image (Instance) was preserved.</description>
<size>11861379395</size>
</item><item>
<title>MICCAI_BraTS_2019_Data_Training</title>
<category>Dataset</category>
<infohash>82cef583fa17480b0f9a6342591d01dc67abe055</infohash>
<guid>https://academictorrents.com/details/82cef583fa17480b0f9a6342591d01dc67abe055</guid>
<link>https://academictorrents.com/details/82cef583fa17480b0f9a6342591d01dc67abe055</link>
<description>https://i.imgur.com/iONFbKt.gif Volume list (count: 335)     HGG/BraTS19_2013_11_1 HGG/BraTS19_2013_12_1 HGG/BraTS19_2013_13_1 HGG/BraTS19_2013_14_1 HGG/BraTS19_2013_17_1 HGG/BraTS19_2013_18_1 HGG/BraTS19_2013_19_1 HGG/BraTS19_2013_20_1 HGG/BraTS19_2013_2_1 HGG/BraTS19_2013_22_1 HGG/BraTS19_2013_23_1 HGG/BraTS19_2013_25_1 HGG/BraTS19_2013_26_1 HGG/BraTS19_2013_27_1 HGG/BraTS19_2013_3_1 HGG/BraTS19_2013_21_1 HGG/BraTS19_2013_4_1 HGG/BraTS19_2013_5_1 HGG/BraTS19_2013_7_1 HGG/BraTS19_CBICA_AAB_1 HGG/BraTS19_CBICA_AAG_1 HGG/BraTS19_CBICA_AAL_1 HGG/BraTS19_CBICA_AAP_1 HGG/BraTS19_CBICA_ABB_1 HGG/BraTS19_CBICA_ABE_1 HGG/BraTS19_CBICA_ABM_1 HGG/BraTS19_CBICA_ABN_1 HGG/BraTS19_CBICA_ABO_1 HGG/BraTS19_CBICA_ABY_1 HGG/BraTS19_CBICA_ALN_1 HGG/BraTS19_CBICA_ALU_1 HGG/BraTS19_CBICA_ALX_1 HGG/BraTS19_CBICA_AME_1 HGG/BraTS19_CBICA_AMH_1 HGG/BraTS19_CBICA_ANG_1 HGG/BraTS19_CBICA_ANI_1 HGG/BraTS19_CBICA_ANP_1 HGG/BraTS19_CBICA_ANZ_1 HGG/BraTS19_CBICA_AOD_1 HGG/BraTS19_CBICA_AOH_1 HGG/BraTS19_CBICA_AOO_1 HGG/BraTS19_CBICA_AOP_1 HGG/BraTS19_CBICA_AOZ_1 HGG/BraTS19_CBICA_APR_1 HGG/BraTS19_CBICA_APY_1 HGG/BraTS19_CBICA_APZ_1 HGG/BraTS19_CBICA_AQA_1 HGG/BraTS19_CBICA_AQD_1 HGG/BraTS19_CBICA_AQG_1 HGG/BraTS19_CBICA_AQJ_1 HGG/BraTS19_CBICA_AQN_1 HGG/BraTS19_CBICA_AQO_1 HGG/BraTS19_CBICA_AQP_1 HGG/BraTS19_CBICA_AQQ_1 HGG/BraTS19_CBICA_AQR_1 HGG/BraTS19_CBICA_AQT_1 HGG/BraTS19_CBICA_AQU_1 HGG/BraTS19_CBICA_AQV_1 HGG/BraTS19_CBICA_AQY_1 HGG/BraTS19_CBICA_AQZ_1 HGG/BraTS19_CBICA_ARF_1 HGG/BraTS19_CBICA_ARW_1 HGG/BraTS19_CBICA_ARZ_1 HGG/BraTS19_CBICA_ASA_1 HGG/BraTS19_CBICA_ASE_1 HGG/BraTS19_CBICA_ASG_1 HGG/BraTS19_CBICA_ASH_1 HGG/BraTS19_CBICA_ASK_1 HGG/BraTS19_CBICA_ASN_1 HGG/BraTS19_CBICA_ASO_1 HGG/BraTS19_CBICA_ASU_1 HGG/BraTS19_CBICA_ASV_1 HGG/BraTS19_CBICA_ASW_1 HGG/BraTS19_CBICA_ASY_1 HGG/BraTS19_CBICA_ATB_1 HGG/BraTS19_CBICA_ATD_1 HGG/BraTS19_CBICA_ATF_1 HGG/BraTS19_CBICA_ATP_1 HGG/BraTS19_CBICA_ATV_1 HGG/BraTS19_CBICA_ATX_1 HGG/BraTS19_CBICA_AUN_1 HGG/BraTS19_CBICA_AUQ_1 
HGG/BraTS19_CBICA_AUR_1 HGG/BraTS19_CBICA_AVG_1 HGG/BraTS19_CBICA_AVJ_1 HGG/BraTS19_CBICA_AVV_1 HGG/BraTS19_CBICA_AWG_1 HGG/BraTS19_CBICA_AWH_1 HGG/BraTS19_CBICA_AWI_1 HGG/BraTS19_CBICA_AXJ_1 HGG/BraTS19_CBICA_AXL_1 HGG/BraTS19_CBICA_AXM_1 HGG/BraTS19_CBICA_AXN_1 HGG/BraTS19_CBICA_AXO_1 HGG/BraTS19_CBICA_AXQ_1 HGG/BraTS19_CBICA_AXW_1 HGG/BraTS19_CBICA_AYA_1 HGG/BraTS19_CBICA_AYI_1 HGG/BraTS19_CBICA_AYU_1 HGG/BraTS19_CBICA_AYW_1 HGG/BraTS19_CBICA_AZD_1 HGG/BraTS19_CBICA_AZH_1 HGG/BraTS19_CBICA_BFB_1 HGG/BraTS19_CBICA_BFP_1 HGG/BraTS19_CBICA_BHB_1 HGG/BraTS19_CBICA_BHK_1 HGG/BraTS19_CBICA_BHM_1 HGG/BraTS19_TCIA01_147_1 HGG/BraTS19_TCIA01_150_1 HGG/BraTS19_TCIA01_180_1 HGG/BraTS19_TCIA01_186_1 HGG/BraTS19_TCIA01_190_1 HGG/BraTS19_TCIA01_201_1 HGG/BraTS19_TCIA01_203_1 HGG/BraTS19_TCIA01_221_1 HGG/BraTS19_TCIA01_231_1 HGG/BraTS19_CBICA_AOS_1 HGG/BraTS19_TCIA01_235_1 HGG/BraTS19_TCIA01_335_1 HGG/BraTS19_TCIA01_378_1 HGG/BraTS19_TCIA01_390_1 HGG/BraTS19_TCIA01_401_1 HGG/BraTS19_TCIA01_411_1 HGG/BraTS19_TCIA01_412_1 HGG/BraTS19_TCIA01_425_1 HGG/BraTS19_TCIA01_429_1 HGG/BraTS19_TCIA01_448_1 HGG/BraTS19_TCIA01_460_1 HGG/BraTS19_TCIA01_499_1 HGG/BraTS19_TCIA02_117_1 HGG/BraTS19_TCIA02_118_1 HGG/BraTS19_TCIA02_135_1 HGG/BraTS19_TCIA02_151_1 HGG/BraTS19_TCIA02_168_1 HGG/BraTS19_TCIA02_171_1 HGG/BraTS19_TCIA02_179_1 HGG/BraTS19_TCIA02_198_1 HGG/BraTS19_TCIA02_208_1 HGG/BraTS19_TCIA02_222_1 HGG/BraTS19_TCIA02_226_1 HGG/BraTS19_TCIA02_274_1 HGG/BraTS19_TCIA02_283_1 HGG/BraTS19_TCIA02_290_1 HGG/BraTS19_TCIA02_300_1 HGG/BraTS19_TCIA02_309_1 HGG/BraTS19_TCIA02_314_1 HGG/BraTS19_TCIA02_321_1 HGG/BraTS19_TCIA02_322_1 HGG/BraTS19_TCIA02_331_1 HGG/BraTS19_TCIA02_368_1 HGG/BraTS19_TCIA02_370_1 HGG/BraTS19_TCIA02_374_1 HGG/BraTS19_TCIA02_377_1 HGG/BraTS19_TCIA02_394_1 HGG/BraTS19_TCIA02_430_1 HGG/BraTS19_TCIA02_455_1 HGG/BraTS19_TCIA02_471_1 HGG/BraTS19_TCIA02_473_1 HGG/BraTS19_TCIA02_491_1 HGG/BraTS19_TCIA02_605_1 HGG/BraTS19_TCIA02_606_1 HGG/BraTS19_TCIA02_607_1 HGG/BraTS19_TCIA02_608_1 
HGG/BraTS19_TCIA03_121_1 HGG/BraTS19_TCIA03_133_1 HGG/BraTS19_TCIA03_138_1 HGG/BraTS19_TCIA03_199_1 HGG/BraTS19_TCIA03_257_1 HGG/BraTS19_TCIA03_265_1 HGG/BraTS19_TCIA03_296_1 HGG/BraTS19_TCIA03_338_1 HGG/BraTS19_TCIA03_375_1 HGG/BraTS19_TCIA03_419_1 HGG/BraTS19_TCIA03_474_1 HGG/BraTS19_TCIA03_498_1 HGG/BraTS19_TCIA04_111_1 HGG/BraTS19_TCIA04_149_1 HGG/BraTS19_TCIA04_192_1 HGG/BraTS19_TCIA04_328_1 HGG/BraTS19_TCIA04_343_1 HGG/BraTS19_TCIA04_361_1 HGG/BraTS19_TCIA04_437_1 HGG/BraTS19_TCIA04_479_1 HGG/BraTS19_TCIA05_277_1 HGG/BraTS19_TCIA05_396_1 HGG/BraTS19_TCIA05_444_1 HGG/BraTS19_TCIA05_478_1 HGG/BraTS19_TCIA06_165_1 HGG/BraTS19_TCIA06_184_1 HGG/BraTS19_TCIA06_211_1 HGG/BraTS19_TCIA06_247_1 HGG/BraTS19_TCIA06_332_1 HGG/BraTS19_TCIA06_372_1 HGG/BraTS19_TCIA06_409_1 HGG/BraTS19_TCIA06_603_1 HGG/BraTS19_TCIA08_105_1 HGG/BraTS19_TCIA08_113_1 HGG/BraTS19_TCIA08_162_1 HGG/BraTS19_TCIA08_167_1 HGG/BraTS19_TCIA08_205_1 HGG/BraTS19_TCIA08_218_1 HGG/BraTS19_TCIA08_234_1 HGG/BraTS19_TCIA08_242_1 HGG/BraTS19_TCIA08_278_1 HGG/BraTS19_TCIA08_280_1 HGG/BraTS19_TCIA08_319_1 HGG/BraTS19_TCIA08_406_1 HGG/BraTS19_TCIA08_436_1 HGG/BraTS19_TCIA08_469_1 HGG/BraTS19_CBICA_ANV_1 HGG/BraTS19_CBICA_AOC_1 HGG/BraTS19_2013_10_1 HGG/BraTS19_TCIA01_131_1 HGG/BraTS19_CBICA_APK_1 HGG/BraTS19_CBICA_ASF_1 HGG/BraTS19_CBICA_ASR_1 HGG/BraTS19_CBICA_ATN_1 HGG/BraTS19_CBICA_AUA_1 HGG/BraTS19_CBICA_AUW_1 HGG/BraTS19_CBICA_AUX_1 HGG/BraTS19_CBICA_AVB_1 HGG/BraTS19_CBICA_AVF_1 HGG/BraTS19_CBICA_AVT_1 HGG/BraTS19_CBICA_AWV_1 HGG/BraTS19_CBICA_AWX_1 HGG/BraTS19_CBICA_AYC_1 HGG/BraTS19_CBICA_AYG_1 HGG/BraTS19_CBICA_BAN_1 HGG/BraTS19_CBICA_BAP_1 HGG/BraTS19_CBICA_BAX_1 HGG/BraTS19_CBICA_BBG_1 HGG/BraTS19_CBICA_BCF_1 HGG/BraTS19_CBICA_BCL_1 HGG/BraTS19_CBICA_BDK_1 HGG/BraTS19_CBICA_BEM_1 HGG/BraTS19_CBICA_BGE_1 HGG/BraTS19_CBICA_BGG_1 HGG/BraTS19_CBICA_BGN_1 HGG/BraTS19_CBICA_BGO_1 HGG/BraTS19_CBICA_BGR_1 HGG/BraTS19_CBICA_BGT_1 HGG/BraTS19_CBICA_BGW_1 HGG/BraTS19_CBICA_BGX_1 HGG/BraTS19_CBICA_BHQ_1 
HGG/BraTS19_CBICA_BHV_1 HGG/BraTS19_CBICA_BHZ_1 HGG/BraTS19_CBICA_BIC_1 HGG/BraTS19_CBICA_BJY_1 HGG/BraTS19_CBICA_BKV_1 HGG/BraTS19_CBICA_BLJ_1 HGG/BraTS19_CBICA_BNR_1 HGG/BraTS19_TMC_06290_1 HGG/BraTS19_TMC_06643_1 HGG/BraTS19_TMC_11964_1 HGG/BraTS19_TMC_12866_1 HGG/BraTS19_TMC_15477_1 HGG/BraTS19_TMC_21360_1 HGG/BraTS19_TMC_27374_1 HGG/BraTS19_TMC_30014_1 LGG/BraTS19_2013_1_1 LGG/BraTS19_2013_16_1 LGG/BraTS19_2013_24_1 LGG/BraTS19_2013_15_1 LGG/BraTS19_2013_28_1 LGG/BraTS19_2013_29_1 LGG/BraTS19_2013_6_1 LGG/BraTS19_2013_8_1 LGG/BraTS19_2013_9_1 LGG/BraTS19_TCIA09_177_1 LGG/BraTS19_TCIA09_254_1 LGG/BraTS19_TCIA09_255_1 LGG/BraTS19_TCIA09_312_1 LGG/BraTS19_TCIA09_402_1 LGG/BraTS19_TCIA09_428_1 LGG/BraTS19_TCIA09_451_1 LGG/BraTS19_TCIA09_462_1 LGG/BraTS19_TCIA09_493_1 LGG/BraTS19_TCIA09_620_1 LGG/BraTS19_TCIA10_103_1 LGG/BraTS19_TCIA10_109_1 LGG/BraTS19_TCIA10_130_1 LGG/BraTS19_TCIA10_152_1 LGG/BraTS19_TCIA10_175_1 LGG/BraTS19_TCIA10_202_1 LGG/BraTS19_TCIA10_241_1 LGG/BraTS19_TCIA10_261_1 LGG/BraTS19_TCIA10_266_1 LGG/BraTS19_TCIA10_276_1 LGG/BraTS19_TCIA10_282_1 LGG/BraTS19_TCIA10_299_1 LGG/BraTS19_TCIA10_307_1 LGG/BraTS19_TCIA10_310_1 LGG/BraTS19_TCIA10_325_1 LGG/BraTS19_TCIA10_330_1 LGG/BraTS19_TCIA10_346_1 LGG/BraTS19_TCIA10_351_1 LGG/BraTS19_TCIA10_387_1 LGG/BraTS19_TCIA10_393_1 LGG/BraTS19_TCIA10_408_1 LGG/BraTS19_TCIA10_410_1 LGG/BraTS19_TCIA10_413_1 LGG/BraTS19_TCIA10_420_1 LGG/BraTS19_TCIA10_442_1 LGG/BraTS19_TCIA10_449_1 LGG/BraTS19_TCIA10_490_1 LGG/BraTS19_TCIA10_625_1 LGG/BraTS19_TCIA10_628_1 LGG/BraTS19_TCIA10_629_1 LGG/BraTS19_TCIA10_632_1 LGG/BraTS19_TCIA10_637_1 LGG/BraTS19_TCIA10_639_1 LGG/BraTS19_TCIA10_640_1 LGG/BraTS19_TCIA10_644_1 LGG/BraTS19_TCIA12_101_1 LGG/BraTS19_TCIA12_249_1 LGG/BraTS19_TCIA12_298_1 LGG/BraTS19_TCIA12_466_1 LGG/BraTS19_TCIA12_470_1 LGG/BraTS19_TCIA12_480_1 LGG/BraTS19_TCIA13_615_1 LGG/BraTS19_TCIA13_618_1 LGG/BraTS19_TCIA13_621_1 LGG/BraTS19_TCIA13_623_1 LGG/BraTS19_TCIA13_624_1 LGG/BraTS19_TCIA13_630_1 
LGG/BraTS19_TCIA13_633_1 LGG/BraTS19_TCIA13_634_1 LGG/BraTS19_TCIA13_642_1 LGG/BraTS19_TCIA13_645_1 LGG/BraTS19_TCIA13_650_1 LGG/BraTS19_TCIA13_653_1 LGG/BraTS19_TCIA13_654_1 LGG/BraTS19_TMC_09043_1 LGG/BraTS19_2013_0_1 LGG/BraTS19_TCIA09_141_1</description>
<size>2759083974</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 4 CCD 4</title>
<category>Dataset</category>
<infohash>fe498d036d9370751edd5e2ad9be3c64c9ced18a</infohash>
<guid>https://academictorrents.com/details/fe498d036d9370751edd5e2ad9be3c64c9ced18a</guid>
<link>https://academictorrents.com/details/fe498d036d9370751edd5e2ad9be3c64c9ced18a</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is currently being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to determine the community's interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors. Observing sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include one of the following acknowledgments: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44008191360</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 4 CCD 3</title>
<category>Dataset</category>
<infohash>8f095de198c6a704af887a1e3f43de05384b26f6</infohash>
<guid>https://academictorrents.com/details/8f095de198c6a704af887a1e3f43de05384b26f6</guid>
<link>https://academictorrents.com/details/8f095de198c6a704af887a1e3f43de05384b26f6</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is currently being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to determine the community's interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors. Observing sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include one of the following acknowledgments: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44008191360</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 4 CCD 2</title>
<category>Dataset</category>
<infohash>d86eaeb53d0e15b615e194408b5a3b57da3d8326</infohash>
<guid>https://academictorrents.com/details/d86eaeb53d0e15b615e194408b5a3b57da3d8326</guid>
<link>https://academictorrents.com/details/d86eaeb53d0e15b615e194408b5a3b57da3d8326</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is currently being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to determine the community's interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors. Observing sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include one of the following acknowledgments: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44008191360</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 4 CCD 1</title>
<category>Dataset</category>
<infohash>7d33d30fdc1e781472cc80ab84e2ff83029bb2bf</infohash>
<guid>https://academictorrents.com/details/7d33d30fdc1e781472cc80ab84e2ff83029bb2bf</guid>
<link>https://academictorrents.com/details/7d33d30fdc1e781472cc80ab84e2ff83029bb2bf</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is currently being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to determine the community's interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors. Observing sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include one of the following acknowledgments: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44008191360</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 3 CCD 4</title>
<category>Dataset</category>
<infohash>a48f78122a3ad78efe622ae576410a971bc38972</infohash>
<guid>https://academictorrents.com/details/a48f78122a3ad78efe622ae576410a971bc38972</guid>
<link>https://academictorrents.com/details/a48f78122a3ad78efe622ae576410a971bc38972</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data via torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44008191360</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 3 CCD 3</title>
<category>Dataset</category>
<infohash>5c0b524a170ce7b2a183db8b4e2a2432f7f08056</infohash>
<guid>https://academictorrents.com/details/5c0b524a170ce7b2a183db8b4e2a2432f7f08056</guid>
<link>https://academictorrents.com/details/5c0b524a170ce7b2a183db8b4e2a2432f7f08056</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data via torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44008191360</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 3 CCD 2</title>
<category>Dataset</category>
<infohash>90d03e9b4345a4debf30e654ce07b259ba978006</infohash>
<guid>https://academictorrents.com/details/90d03e9b4345a4debf30e654ce07b259ba978006</guid>
<link>https://academictorrents.com/details/90d03e9b4345a4debf30e654ce07b259ba978006</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data via torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44008191360</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 3 CCD 1</title>
<category>Dataset</category>
<infohash>fdcb6263ca9ea691baed21c76a656ca1a1f7f995</infohash>
<guid>https://academictorrents.com/details/fdcb6263ca9ea691baed21c76a656ca1a1f7f995</guid>
<link>https://academictorrents.com/details/fdcb6263ca9ea691baed21c76a656ca1a1f7f995</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data via torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44008191360</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 2 CCD 4</title>
<category>Dataset</category>
<infohash>e94c2653a7ca3469e9c4a1b15890bea6ae74e53b</infohash>
<guid>https://academictorrents.com/details/e94c2653a7ca3469e9c4a1b15890bea6ae74e53b</guid>
<link>https://academictorrents.com/details/e94c2653a7ca3469e9c4a1b15890bea6ae74e53b</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data via torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44008191360</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 2 CCD 3</title>
<category>Dataset</category>
<infohash>345389a62fc948c4f0e496af2985417d58ae52bb</infohash>
<guid>https://academictorrents.com/details/345389a62fc948c4f0e496af2985417d58ae52bb</guid>
<link>https://academictorrents.com/details/345389a62fc948c4f0e496af2985417d58ae52bb</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data via torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44008191360</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 2 CCD 2</title>
<category>Dataset</category>
<infohash>1345165e6434e1ea6a71089780548239b74ac99b</infohash>
<guid>https://academictorrents.com/details/1345165e6434e1ea6a71089780548239b74ac99b</guid>
<link>https://academictorrents.com/details/1345165e6434e1ea6a71089780548239b74ac99b</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data via torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44018749440</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 2 CCD 1</title>
<category>Dataset</category>
<infohash>e3e3bb87b3fd29b6f0f5b031f1c8dc4a7f3f0e2c</infohash>
<guid>https://academictorrents.com/details/e3e3bb87b3fd29b6f0f5b031f1c8dc4a7f3f0e2c</guid>
<link>https://academictorrents.com/details/e3e3bb87b3fd29b6f0f5b031f1c8dc4a7f3f0e2c</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data via torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44008191360</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 1 CCD 4</title>
<category>Dataset</category>
<infohash>a2fd85c4f45fbf4e086387e97a02d4a47951dae3</infohash>
<guid>https://academictorrents.com/details/a2fd85c4f45fbf4e086387e97a02d4a47951dae3</guid>
<link>https://academictorrents.com/details/a2fd85c4f45fbf4e086387e97a02d4a47951dae3</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and after a series of maneuvers was placed in a highly-elliptical 13.7 day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire, four camera field-of-view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is currently being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.  
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), which is hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to determine community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors. Observing sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include one of the following acknowledgments: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44006739840</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 1 CCD 3</title>
<category>Dataset</category>
<infohash>b7c5306e02b57bc8dc7a54f8f5b0ab05f756d672</infohash>
<guid>https://academictorrents.com/details/b7c5306e02b57bc8dc7a54f8f5b0ab05f756d672</guid>
<link>https://academictorrents.com/details/b7c5306e02b57bc8dc7a54f8f5b0ab05f756d672</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is currently being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), which is hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to determine community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors. Observing sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include one of the following acknowledgments: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44006728320</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 1 CCD 2</title>
<category>Dataset</category>
<infohash>9080cbd5f6cfb91e1894397cd74b837cec3c8cee</infohash>
<guid>https://academictorrents.com/details/9080cbd5f6cfb91e1894397cd74b837cec3c8cee</guid>
<link>https://academictorrents.com/details/9080cbd5f6cfb91e1894397cd74b837cec3c8cee</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is currently being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), which is hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to determine community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors. Observing sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include one of the following acknowledgments: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44007166080</size>
</item><item>
<title>TESS Sector 23 Calibrated FFI - Camera 1 CCD 1</title>
<category>Dataset</category>
<infohash>61581a8869b6edb143e64f205a983c218e2d0a51</infohash>
<guid>https://academictorrents.com/details/61581a8869b6edb143e64f205a983c218e2d0a51</guid>
<link>https://academictorrents.com/details/61581a8869b6edb143e64f205a983c218e2d0a51</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is currently being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-23 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), which is hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to determine community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors. Observing sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include one of the following acknowledgments: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44007177600</size>
</item><item>
<title>The Representativeness of Automated Web Crawls as a Surrogate for Human Browsing</title>
<category>Dataset</category>
<infohash>5e9ef2b5531ce3b965681be6eccab1fbd114af62</infohash>
<guid>https://academictorrents.com/details/5e9ef2b5531ce3b965681be6eccab1fbd114af62</guid>
<link>https://academictorrents.com/details/5e9ef2b5531ce3b965681be6eccab1fbd114af62</link>
<description>Large-scale Web crawls have emerged as the state of the art for studying characteristics of the Web. In particular, they are a core tool for online tracking research. Web crawling is an attractive approach to data collection, as crawls can be run at relatively low infrastructure cost and do not require handling sensitive user data such as browsing histories. However, the biases introduced by using crawls as a proxy for human browsing data have not been well studied. Crawls may fail to capture the diversity of user environments, and the snapshot view of the Web presented by one-time crawls does not reflect its constantly evolving nature, which hinders reproducibility of crawl-based studies. In this paper, we quantify the repeatability and representativeness of Web crawls in terms of common tracking and fingerprinting metrics, considering both variation across crawls and divergence from human browser usage. We quantify baseline variation of simultaneous crawls, then isolate the effects of time, cloud vs. residential IP addresses, and operating system. This provides a foundation to assess the agreement between crawls visiting a standard list of high-traffic websites and actual browsing behaviour measured from an opt-in sample of over 50,000 users of the Firefox Web browser. Our analysis reveals differences between the treatment of stateless crawling infrastructure and generally stateful human browsing, showing, for example, that crawlers tend to experience higher rates of third-party activity than human browser users on loading pages from the same domains.</description>
<size>63789293757</size>
</item><item>
<title>TrackingNet: Test</title>
<category>Dataset</category>
<infohash>76edd91bab320689666c9697b009646f81c219dc</infohash>
<guid>https://academictorrents.com/details/76edd91bab320689666c9697b009646f81c219dc</guid>
<link>https://academictorrents.com/details/76edd91bab320689666c9697b009646f81c219dc</link>
<description/>
<size>35020146159</size>
</item><item>
<title>TrackingNet: Train_11</title>
<category>Dataset</category>
<infohash>4baf3f4fbc80a4bacbed088ad6e5599cf42a28b1</infohash>
<guid>https://academictorrents.com/details/4baf3f4fbc80a4bacbed088ad6e5599cf42a28b1</guid>
<link>https://academictorrents.com/details/4baf3f4fbc80a4bacbed088ad6e5599cf42a28b1</link>
<description/>
<size>92978301560</size>
</item><item>
<title>TrackingNet: Train_10</title>
<category>Dataset</category>
<infohash>9227c7efee8899aa52c83d9c6b6f4dee8f5c5c54</infohash>
<guid>https://academictorrents.com/details/9227c7efee8899aa52c83d9c6b6f4dee8f5c5c54</guid>
<link>https://academictorrents.com/details/9227c7efee8899aa52c83d9c6b6f4dee8f5c5c54</link>
<description/>
<size>92158091077</size>
</item><item>
<title>TrackingNet: Train_9</title>
<category>Dataset</category>
<infohash>ec85a6e1bebd206bf6e10f646ffe71a7cd4899fd</infohash>
<guid>https://academictorrents.com/details/ec85a6e1bebd206bf6e10f646ffe71a7cd4899fd</guid>
<link>https://academictorrents.com/details/ec85a6e1bebd206bf6e10f646ffe71a7cd4899fd</link>
<description/>
<size>88133056354</size>
</item><item>
<title>TrackingNet: Train_8</title>
<category>Dataset</category>
<infohash>5e6616a435f99659f29a72e201081a474d8b9b0f</infohash>
<guid>https://academictorrents.com/details/5e6616a435f99659f29a72e201081a474d8b9b0f</guid>
<link>https://academictorrents.com/details/5e6616a435f99659f29a72e201081a474d8b9b0f</link>
<description/>
<size>96426130552</size>
</item><item>
<title>TrackingNet: Train_7</title>
<category>Dataset</category>
<infohash>201cc2cbc8d5e2a13572b19ddc4fb73146796f52</infohash>
<guid>https://academictorrents.com/details/201cc2cbc8d5e2a13572b19ddc4fb73146796f52</guid>
<link>https://academictorrents.com/details/201cc2cbc8d5e2a13572b19ddc4fb73146796f52</link>
<description/>
<size>89861501664</size>
</item><item>
<title>TrackingNet: Train_6</title>
<category>Dataset</category>
<infohash>caa8e95c4f0f7508710bc1524a51338e598b59aa</infohash>
<guid>https://academictorrents.com/details/caa8e95c4f0f7508710bc1524a51338e598b59aa</guid>
<link>https://academictorrents.com/details/caa8e95c4f0f7508710bc1524a51338e598b59aa</link>
<description/>
<size>94851382878</size>
</item><item>
<title>TrackingNet: Train_5</title>
<category>Dataset</category>
<infohash>17fd3086953a73710d5535e1987de2dcdb4300db</infohash>
<guid>https://academictorrents.com/details/17fd3086953a73710d5535e1987de2dcdb4300db</guid>
<link>https://academictorrents.com/details/17fd3086953a73710d5535e1987de2dcdb4300db</link>
<description/>
<size>92637106349</size>
</item><item>
<title>TrackingNet: Train_4</title>
<category>Dataset</category>
<infohash>57b2861f4ef6729d25e24db270b0db06a2056e5d</infohash>
<guid>https://academictorrents.com/details/57b2861f4ef6729d25e24db270b0db06a2056e5d</guid>
<link>https://academictorrents.com/details/57b2861f4ef6729d25e24db270b0db06a2056e5d</link>
<description/>
<size>92270128173</size>
</item><item>
<title>TrackingNet: Train_3</title>
<category>Dataset</category>
<infohash>6c6c797d78ae0713cdef2324b1ede41b225c45c6</infohash>
<guid>https://academictorrents.com/details/6c6c797d78ae0713cdef2324b1ede41b225c45c6</guid>
<link>https://academictorrents.com/details/6c6c797d78ae0713cdef2324b1ede41b225c45c6</link>
<description/>
<size>86895458634</size>
</item><item>
<title>TrackingNet: Train_2</title>
<category>Dataset</category>
<infohash>4f4ba0c341a29da9104e3fd326e6036c61694399</infohash>
<guid>https://academictorrents.com/details/4f4ba0c341a29da9104e3fd326e6036c61694399</guid>
<link>https://academictorrents.com/details/4f4ba0c341a29da9104e3fd326e6036c61694399</link>
<description/>
<size>95289711187</size>
</item><item>
<title>TrackingNet: Train_1</title>
<category>Dataset</category>
<infohash>4bf5654081416a38dcd4d31c17e59a6ced8d838c</infohash>
<guid>https://academictorrents.com/details/4bf5654081416a38dcd4d31c17e59a6ced8d838c</guid>
<link>https://academictorrents.com/details/4bf5654081416a38dcd4d31c17e59a6ced8d838c</link>
<description/>
<size>90232360330</size>
</item><item>
<title>TrackingNet: Train_0</title>
<category>Dataset</category>
<infohash>3ba4d1ce2ec3a2e1e0672097f42010ccdb2036cb</infohash>
<guid>https://academictorrents.com/details/3ba4d1ce2ec3a2e1e0672097f42010ccdb2036cb</guid>
<link>https://academictorrents.com/details/3ba4d1ce2ec3a2e1e0672097f42010ccdb2036cb</link>
<description/>
<size>91521666278</size>
</item><item>
<title>OPUS Russian Open Speech To Text Dataset v1.01</title>
<category>Dataset</category>
<infohash>95b4cab0f99850e119114c8b6df00193ab5fa34f</infohash>
<guid>https://academictorrents.com/details/95b4cab0f99850e119114c8b6df00193ab5fa34f</guid>
<link>https://academictorrents.com/details/95b4cab0f99850e119114c8b6df00193ab5fa34f</link>
<description>v1.0-beta Arguably the largest public Russian STT dataset to date: 15M utterances; 20,000 hours; 2.3 TB (in mono .wav format, int16). For more information please visit https://github.com/snakers4/open_stt/</description>
<size>381530620667</size>
</item><item>
<title>100 Images from Hubble Space Telescope</title>
<category>Dataset</category>
<infohash>1c28ff99b2f7a84d57d38b48211bfedf513915f5</infohash>
<guid>https://academictorrents.com/details/1c28ff99b2f7a84d57d38b48211bfedf513915f5</guid>
<link>https://academictorrents.com/details/1c28ff99b2f7a84d57d38b48211bfedf513915f5</link>
<description>This dataset contains 100 full-size images from the Hubble Space Telescope. All images are released under the Creative Commons Attribution 4.0 (CC BY 4.0) license, and you can read more about that here: https://www.spacetelescope.org/copyright/</description>
<size>10049374520</size>
</item><item>
<title>PADCHEST_SJ (Resized 224x224) (Fixed cropping)</title>
<category>Dataset</category>
<infohash>96ebb4f92b85929eadfb16761f310a6d04105797</infohash>
<guid>https://academictorrents.com/details/96ebb4f92b85929eadfb16761f310a6d04105797</guid>
<link>https://academictorrents.com/details/96ebb4f92b85929eadfb16761f310a6d04105797</link>
<description>For use here: https://github.com/mlmed/torchxrayvision/blob/master/torchxrayvision/datasets.py#L472 Images are resized to 224x224 from the original dataset. This dataset includes more than 160,000 images, obtained from 67,000 patients, that were interpreted and reported by radiologists at Hospital San Juan (Spain) from 2009 to 2017, covering six different position views and additional information on image acquisition and patient demographics. The reports were labeled with 174 different radiographic findings, 19 differential diagnoses and 104 anatomic locations organized as a hierarchical taxonomy and mapped onto standard Unified Medical Language System (UMLS) terminology. https://i.imgur.com/MpVlYgB.png Padchest</description>
<size>12888277849</size>
</item><item>
<title>PANDAcap – SSH Honeypot Dataset</title>
<category>Dataset</category>
<infohash>4a3eadf47425cb60111ec224de272997294eec93</infohash>
<guid>https://academictorrents.com/details/4a3eadf47425cb60111ec224de272997294eec93</guid>
<link>https://academictorrents.com/details/4a3eadf47425cb60111ec224de272997294eec93</link>
<description># PANDAcap – SSH Honeypot Dataset ## Overview This is a dataset of **63 [PANDA][panda] traces**, collected using the [PANDAcap][pandacap] framework. The dataset aims to offer a starting point for the analysis of *ssh brute force attacks*. The traces were collected over the course of approximately 3 days, from 21 to 23 February 2020. A VM was configured using PANDAcap so that it accepts all passwords for user `root`. When an ssh session starts for the user, PANDA is signaled by the [recctrl plugin][recctrl] to start recording for 30 minutes. You can read more details about the experimental setup and an overview of the dataset in our **EuroSec 2020** publication [1]. --- [1] Manolis Stamatogiannakis, Herbert Bos, and Paul Groth. PANDAcap: A Framework for Streamlining Collection of Full-System Traces. In *Proceedings of the 13th European Workshop on Systems Security*, EuroSec '20, Heraklion, Greece, April 2020. doi: [10.1145/3380786.3391396][eurosec20-doi], preprint: [vusec.net][eurosec20-preprint] --- ## Dataset layout The dataset is split into 3 zip files/directories: * **rr**: Contains the 63 PANDA traces of the dataset. The traces are in the upcoming RRArchive format.
Note that PANDA support for the format is still a work in progress at the time of writing (April 2020). If you need to downgrade to the traditional PANDA trace format, you can use the snippet we provide below. * **qcow**: Contains the QCOW base image (`ubuntu16-planb.qcow2`) used to create the dataset, as well as the disk deltas for the 63 traces. These can be mounted to inspect the contents of the filesystem before and after each session. Quick instructions on how to mount and inspect a QCOW image can be found below. * **pcap**: Contains the pcap network traces for the sessions in the PANDA traces. These have been extracted using the PANDA [network plugin][network]. We decided to also include them in the dataset as standalone files for convenience. Additionally, we provide the PANDA linux kernel profile `ubuntu16-planb-kernelinfo.conf`, which can be used to analyze the traces using the PANDA [osi_linux plugin][osi_linux]. If you wish to reuse the VM image in your project, it is also available as a standalone download through [academictorrents.com][at-vm-url], along with more detailed information on its contents. ## Handy snippets ### Convert traces to traditional PANDA format From inside the `rr` directory, run: bash for f in *.tar.gz; do</description>
<size>33491872190</size>
</item><item>
<title>TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild</title>
<category>Dataset</category>
<infohash>1faf1b53cc0099d2206f02be42b5688952c3c6b3</infohash>
<guid>https://academictorrents.com/details/1faf1b53cc0099d2206f02be42b5688952c3c6b3</guid>
<link>https://academictorrents.com/details/1faf1b53cc0099d2206f02be42b5688952c3c6b3</link>
<description/>
<size>1138275041195</size>
</item><item>
<title>PANDAcap – SSH Honeypot VM</title>
<category>Dataset</category>
<infohash>39df3904460e909e175434cbd87764b8c487891d</infohash>
<guid>https://academictorrents.com/details/39df3904460e909e175434cbd87764b8c487891d</guid>
<link>https://academictorrents.com/details/39df3904460e909e175434cbd87764b8c487891d</link>
<description># PANDAcap – Ubuntu 16.04 QCOW ## Overview This is the [QCOW][qcow] disk image used in our **EuroSec 2020** publication about the **[PANDAcap][pandacap]** framework [1]. --- [1] Manolis Stamatogiannakis, Herbert Bos, and Paul Groth. PANDAcap: A Framework for Streamlining Collection of Full-System Traces. In *Proceedings of the 13th European Workshop on Systems Security*, EuroSec '20, Heraklion, Greece, April 2020. doi: [10.1145/3380786.3391396][eurosec20-doi] --- ## Image details ### Generic information * Installed operating system: Ubuntu 16.04 LTS * Kernel image: `linux-image-4.4.0-130-generic` * Last software update: 17 Feb 2020 * Login credentials: `panda:panda` * The image has been scrubbed and compacted to reduce its size and make it ready for reuse in other projects. * A [PANDA][panda] kernel profile for use with the [osi_linux][osi_linux] plugin is included: `ubuntu16-planb-kernelinfo.conf` ### Modifications related to PANDAcap The image contains some modifications related to [PANDAcap][pandacap], as listed below. * [`recctrlu`][recctrlu] has been installed in `/usr/local/sbin`. * [`recctrlu.sh`][recctrlu] has been installed in `/usr/local/bin`. * `recctrlu.sh` has been hooked into `/etc/pam.d/sshd`.
If the PANDA [`recctrl`][recctrl] plugin is active, this will trigger PANDA to start recording after a successful ssh login. * `rc.local` will run `/root/usbbootstrap.sh` at boot time. This will run runtime bootstrapping scripts when the image boots, and then clean up after itself. ### Removing PANDAcap modifications The PANDAcap-related modifications should not affect the use of the image for most other purposes. If needed, they can be removed as follows:

```bash
sudo sed -i '/recctrlu.sh/d' /etc/pam.d/sshd
sudo rm -f /usr/local/{,s}bin/recctrlu*
sudo sed -i '/usbbootstrap.sh/d' /etc/rc.local
sudo rm /root/usbbootstrap.sh
```

[eurosec20-doi]: https://doi.org/10.1145/3380786.3391396 [osi_linux]: https://github.com/panda-re/panda/tree/master/panda/plugins/osi_linux [panda]: https://github.com/panda-re/panda [pandacap]: https://github.com/vusec/pandacap [qcow]: https://en.wikipedia.org/wiki/Qcow [recctrl]: https://github.com/panda-re/panda/tree/master/panda/plugins/recctrl [recctrlu]: https://github.com/panda-re/panda/tree/master/panda/plugins/recctrl/utils</description>
<size>1925714703</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 4 CCD 4</title>
<category>Dataset</category>
<infohash>8f9b994dcd762a9b83d1140fad801a5dea923eaa</infohash>
<guid>https://academictorrents.com/details/8f9b994dcd762a9b83d1140fad801a5dea923eaa</guid>
<link>https://academictorrents.com/details/8f9b994dcd762a9b83d1140fad801a5dea923eaa</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is currently being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), which is hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to determine community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors. Observing sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include one of the following acknowledgments: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 4 CCD 3</title>
<category>Dataset</category>
<infohash>e9dd92837c3d78339cd11a23ee7cef727c719041</infohash>
<guid>https://academictorrents.com/details/e9dd92837c3d78339cd11a23ee7cef727c719041</guid>
<link>https://academictorrents.com/details/e9dd92837c3d78339cd11a23ee7cef727c719041</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 4 CCD 2</title>
<category>Dataset</category>
<infohash>d79c54e6e275b08e69a3b0098524cb609ce5dd21</infohash>
<guid>https://academictorrents.com/details/d79c54e6e275b08e69a3b0098524cb609ce5dd21</guid>
<link>https://academictorrents.com/details/d79c54e6e275b08e69a3b0098524cb609ce5dd21</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 4 CCD 1</title>
<category>Dataset</category>
<infohash>ca53afd586cf196c1e29880b26fee99041760028</infohash>
<guid>https://academictorrents.com/details/ca53afd586cf196c1e29880b26fee99041760028</guid>
<link>https://academictorrents.com/details/ca53afd586cf196c1e29880b26fee99041760028</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 3 CCD 4</title>
<category>Dataset</category>
<infohash>90c5ef4538674ca966a5b130b68f10c2b2cb44a7</infohash>
<guid>https://academictorrents.com/details/90c5ef4538674ca966a5b130b68f10c2b2cb44a7</guid>
<link>https://academictorrents.com/details/90c5ef4538674ca966a5b130b68f10c2b2cb44a7</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 3 CCD 3</title>
<category>Dataset</category>
<infohash>ced4b17d67c9c97f4a9732c6e5d29cbc3676cec0</infohash>
<guid>https://academictorrents.com/details/ced4b17d67c9c97f4a9732c6e5d29cbc3676cec0</guid>
<link>https://academictorrents.com/details/ced4b17d67c9c97f4a9732c6e5d29cbc3676cec0</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 3 CCD 2</title>
<category>Dataset</category>
<infohash>83859ccce9a5e5dd518705a0b9913eaaf7564a68</infohash>
<guid>https://academictorrents.com/details/83859ccce9a5e5dd518705a0b9913eaaf7564a68</guid>
<link>https://academictorrents.com/details/83859ccce9a5e5dd518705a0b9913eaaf7564a68</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 3 CCD 1</title>
<category>Dataset</category>
<infohash>69550d6df5e0fc2614bc8a24151c2826abd1522a</infohash>
<guid>https://academictorrents.com/details/69550d6df5e0fc2614bc8a24151c2826abd1522a</guid>
<link>https://academictorrents.com/details/69550d6df5e0fc2614bc8a24151c2826abd1522a</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 2 CCD 4</title>
<category>Dataset</category>
<infohash>2cc694687d868ee70a451130e077e25ce5b83997</infohash>
<guid>https://academictorrents.com/details/2cc694687d868ee70a451130e077e25ce5b83997</guid>
<link>https://academictorrents.com/details/2cc694687d868ee70a451130e077e25ce5b83997</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 2 CCD 3</title>
<category>Dataset</category>
<infohash>a278f973ee354220e9a85954a66e84f0abafa3c5</infohash>
<guid>https://academictorrents.com/details/a278f973ee354220e9a85954a66e84f0abafa3c5</guid>
<link>https://academictorrents.com/details/a278f973ee354220e9a85954a66e84f0abafa3c5</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken down into 16 separate torrents: one torrent for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 2 CCD 2</title>
<category>Dataset</category>
<infohash>151ff5ddb36d033842db63d0759d4c980684f885</infohash>
<guid>https://academictorrents.com/details/151ff5ddb36d033842db63d0759d4c980684f885</guid>
<link>https://academictorrents.com/details/151ff5ddb36d033842db63d0759d4c980684f885</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent are produced by the SPOC at NASA Ames Research Center and consist of calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44367252480</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 2 CCD 1</title>
<category>Dataset</category>
<infohash>91af3a90af5bebf27d4192636278419338f42c87</infohash>
<guid>https://academictorrents.com/details/91af3a90af5bebf27d4192636278419338f42c87</guid>
<link>https://academictorrents.com/details/91af3a90af5bebf27d4192636278419338f42c87</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent are produced by the SPOC at NASA Ames Research Center and consist of calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 1 CCD 4</title>
<category>Dataset</category>
<infohash>8ff4bbfaa119931e7b27bcbd357258ae7b528ea7</infohash>
<guid>https://academictorrents.com/details/8ff4bbfaa119931e7b27bcbd357258ae7b528ea7</guid>
<link>https://academictorrents.com/details/8ff4bbfaa119931e7b27bcbd357258ae7b528ea7</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent are produced by the SPOC at NASA Ames Research Center and consist of calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44366993280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 1 CCD 3</title>
<category>Dataset</category>
<infohash>c6be70d5d72cf10b3bcc24a428c3a3865cd2cda4</infohash>
<guid>https://academictorrents.com/details/c6be70d5d72cf10b3bcc24a428c3a3865cd2cda4</guid>
<link>https://academictorrents.com/details/c6be70d5d72cf10b3bcc24a428c3a3865cd2cda4</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent are produced by the SPOC at NASA Ames Research Center and consist of calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363404800</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 1 CCD 2</title>
<category>Dataset</category>
<infohash>922fc8c6632df5b87184aff73a08be4a8ff5b4d3</infohash>
<guid>https://academictorrents.com/details/922fc8c6632df5b87184aff73a08be4a8ff5b4d3</guid>
<link>https://academictorrents.com/details/922fc8c6632df5b87184aff73a08be4a8ff5b4d3</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent are produced by the SPOC at NASA Ames Research Center and consist of calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>TESS Sector 22 Calibrated FFI - Camera 1 CCD 1</title>
<category>Dataset</category>
<infohash>ca121f7c2af0f2b1cd7b89a57674d3d476c86d27</infohash>
<guid>https://academictorrents.com/details/ca121f7c2af0f2b1cd7b89a57674d3d476c86d27</guid>
<link>https://academictorrents.com/details/ca121f7c2af0f2b1cd7b89a57674d3d476c86d27</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent are produced by the SPOC at NASA Ames Research Center and consist of calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-22 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>44363681280</size>
</item><item>
<title>March 2020 Public Data File from Crossref</title>
<category>Dataset</category>
<infohash>0c6c3fbfdc13f0169b561d29354ea8b188eb9d63</infohash>
<guid>https://academictorrents.com/details/0c6c3fbfdc13f0169b561d29354ea8b188eb9d63</guid>
<link>https://academictorrents.com/details/0c6c3fbfdc13f0169b561d29354ea8b188eb9d63</link>
<description>A data file of the public elements from Crossref’s 112.5 million metadata records (in JSON format). Note that this Crossref metadata is always openly available. The difference here is that we’ve done the time-saving work of putting all of the records registered through March 2020 into one file for download. To keep this metadata current, you can access new records via our public API at: https://api.crossref.org And, if you do use our API, we encourage you to read the section of the documentation on "etiquette" - that is, how to use the API without making it impossible for others to use it.</description>
<size>67018840098</size>
</item><item>
<title>COVID-19 image dataset collection (volumes folder) March 30th 2020</title>
<category>Dataset</category>
<infohash>136ffddd0959108becb2b3a86630bec049fcb0ff</infohash>
<guid>https://academictorrents.com/details/136ffddd0959108becb2b3a86630bec049fcb0ff</guid>
<link>https://academictorrents.com/details/136ffddd0959108becb2b3a86630bec049fcb0ff</link>
<description>Cite: Joseph Paul Cohen, Paul Morrison, and Lan Dao, "COVID-19 Image Data Collection", arXiv:2003.11597, 2020. https://arxiv.org/abs/2003.11597 https://github.com/ieee8023/covid-chestxray-dataset</description>
<size>1105126768</size>
</item><item>
<title>Deep Learning Face Attributes in the Wild</title>
<category>Dataset</category>
<infohash>51ebeaabf2d9781d6c000cf3e23c46cd4ac1e425</infohash>
<guid>https://academictorrents.com/details/51ebeaabf2d9781d6c000cf3e23c46cd4ac1e425</guid>
<link>https://academictorrents.com/details/51ebeaabf2d9781d6c000cf3e23c46cd4ac1e425</link>
<description># Abstract Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags but pre-trained differently. LNet is pre-trained on massive general object categories for face localization, while ANet is pre-trained on massive face identities for attribute prediction. This framework not only outperforms the state of the art by a large margin, but also reveals valuable facts about learning face representations. (1) It shows how the performance of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images give a strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, without the face bounding boxes or landmarks required by other attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts. # Dataset *CelebFaces Attributes Dataset (CelebA)* is a large-scale face attributes dataset with more than *200K* celebrity images, each with *40* attribute annotations. The images in this dataset cover large pose variations and background clutter. CelebA has large diversities, large quantities, and rich annotations, including - *10,177* identities, - *202,599* face images, and - *5* landmark locations and *40* binary attribute annotations per image. 
The dataset can be employed as the training and test set for the following computer vision tasks: face attribute recognition, face detection, face landmark (or facial part) localization, and face synthesis.</description>
<size>29622926999</size>
</item><item>
<title>ph-biodata.2020-03-20.jsonl</title>
<category>Dataset</category>
<infohash>54a3c1c6db43c7623cddbb2ebec088d5ddad8030</infohash>
<guid>https://academictorrents.com/details/54a3c1c6db43c7623cddbb2ebec088d5ddad8030</guid>
<link>https://academictorrents.com/details/54a3c1c6db43c7623cddbb2ebec088d5ddad8030</link>
<description>Data extracted on 2020-03-20 from Pornhub.com. 99,966 profiles are formatted as JSONL.</description>
<size>42862625</size>
</item><item>
<title>RSNA Pneumonia Detection Challenge (JPG files)</title>
<category>Dataset</category>
<infohash>95588a735c9ae4d123f3ca408e56570409bcf2a9</infohash>
<guid>https://academictorrents.com/details/95588a735c9ae4d123f3ca408e56570409bcf2a9</guid>
<link>https://academictorrents.com/details/95588a735c9ae4d123f3ca408e56570409bcf2a9</link>
<description>Details from the challenge: ## What am I predicting? In this challenge competitors are predicting whether pneumonia exists in a given image. They do so by predicting bounding boxes around areas of the lung. Samples without bounding boxes are negative and contain no definitive evidence of pneumonia. Samples with bounding boxes indicate evidence of pneumonia. When making predictions, competitors should predict as many bounding boxes as they feel are necessary, in the format: confidence x-min y-min width height. There should be only ONE predicted row per image; this row may include multiple bounding boxes. A properly formatted row may look like any of the following. For patientIds with no predicted pneumonia / bounding boxes: 0004cfab-14fd-4e49-80ba-63a80b6bddd6, For patientIds with a single predicted bounding box: 0004cfab-14fd-4e49-80ba-63a80b6bddd6,0.5 0 0 100 100 For patientIds with multiple predicted bounding boxes: 0004cfab-14fd-4e49-80ba-63a80b6bddd6,0.5 0 0 100 100 0.5 0 0 100 100, etc. ## File descriptions stage_2_train.csv - the training set; contains patientIds and bounding box / target information. stage_2_detailed_class_info.csv - provides detailed information about the type of positive or negative class for each image. ## Data fields patientId - a patientId; each patientId corresponds to a unique image. x - the upper-left x coordinate of the bounding box. y - the upper-left y coordinate of the bounding box. width - the width of the bounding box. height - the height of the bounding box. Target - the binary target, indicating whether this sample has evidence of pneumonia.</description>
<size>3928285701</size>
</item><item>
<title>RSNA Pneumonia Detection Challenge (DICOM files)</title>
<category>Dataset</category>
<infohash>a0d80e1bb03ef8357d71e058ef9471b4468cd18e</infohash>
<guid>https://academictorrents.com/details/a0d80e1bb03ef8357d71e058ef9471b4468cd18e</guid>
<link>https://academictorrents.com/details/a0d80e1bb03ef8357d71e058ef9471b4468cd18e</link>
<description>Details from the challenge: ## What am I predicting? In this challenge competitors are predicting whether pneumonia exists in a given image. They do so by predicting bounding boxes around areas of the lung. Samples without bounding boxes are negative and contain no definitive evidence of pneumonia. Samples with bounding boxes indicate evidence of pneumonia. When making predictions, competitors should predict as many bounding boxes as they feel are necessary, in the format: confidence x-min y-min width height. There should be only ONE predicted row per image; this row may include multiple bounding boxes. A properly formatted row may look like any of the following. For patientIds with no predicted pneumonia / bounding boxes: 0004cfab-14fd-4e49-80ba-63a80b6bddd6, For patientIds with a single predicted bounding box: 0004cfab-14fd-4e49-80ba-63a80b6bddd6,0.5 0 0 100 100 For patientIds with multiple predicted bounding boxes: 0004cfab-14fd-4e49-80ba-63a80b6bddd6,0.5 0 0 100 100 0.5 0 0 100 100, etc. ## File descriptions stage_2_train.csv - the training set; contains patientIds and bounding box / target information. stage_2_detailed_class_info.csv - provides detailed information about the type of positive or negative class for each image. ## Data fields patientId - a patientId; each patientId corresponds to a unique image. x - the upper-left x coordinate of the bounding box. y - the upper-left y coordinate of the bounding box. width - the width of the bounding box. height - the height of the bounding box. Target - the binary target, indicating whether this sample has evidence of pneumonia.</description>
<size>3956488926</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 4 CCD 4</title>
<category>Dataset</category>
<infohash>9b427692c60ff6eb2e9dde0c0d05cce115e2c64e</infohash>
<guid>https://academictorrents.com/details/9b427692c60ff6eb2e9dde0c0d05cce115e2c64e</guid>
<link>https://academictorrents.com/details/9b427692c60ff6eb2e9dde0c0d05cce115e2c64e</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent are produced by the SPOC at NASA Ames Research Center and consist of calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is currently being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS Mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 4 CCD 3</title>
<category>Dataset</category>
<infohash>5a89966ee98952f36f32a68db9d5eb08c629b49a</infohash>
<guid>https://academictorrents.com/details/5a89966ee98952f36f32a68db9d5eb08c629b49a</guid>
<link>https://academictorrents.com/details/5a89966ee98952f36f32a68db9d5eb08c629b49a</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a 30-minute cadence to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information about this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector are available at https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/ TESS data is being distributed via torrent on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; the service is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is split into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information about the TESS Mission is available at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 4 CCD 2</title>
<category>Dataset</category>
<infohash>aa50b90ab0c7353a8709c0053184a0c1050d3ed7</infohash>
<guid>https://academictorrents.com/details/aa50b90ab0c7353a8709c0053184a0c1050d3ed7</guid>
<link>https://academictorrents.com/details/aa50b90ab0c7353a8709c0053184a0c1050d3ed7</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a 30-minute cadence to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information about this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector are available at https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/ TESS data is being distributed via torrent on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; the service is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is split into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information about the TESS Mission is available at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 4 CCD 1</title>
<category>Dataset</category>
<infohash>a04af60575aff4c50d36fd2188c33199e7d57a4d</infohash>
<guid>https://academictorrents.com/details/a04af60575aff4c50d36fd2188c33199e7d57a4d</guid>
<link>https://academictorrents.com/details/a04af60575aff4c50d36fd2188c33199e7d57a4d</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a 30-minute cadence to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information about this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector are available at https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/ TESS data is being distributed via torrent on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; the service is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is split into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information about the TESS Mission is available at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 3 CCD 4</title>
<category>Dataset</category>
<infohash>d202fd0f701ea86a0b01e5c453df961dc5a94e6d</infohash>
<guid>https://academictorrents.com/details/d202fd0f701ea86a0b01e5c453df961dc5a94e6d</guid>
<link>https://academictorrents.com/details/d202fd0f701ea86a0b01e5c453df961dc5a94e6d</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a 30-minute cadence to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information about this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector are available at https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/ TESS data is being distributed via torrent on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; the service is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is split into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information about the TESS Mission is available at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 3 CCD 3</title>
<category>Dataset</category>
<infohash>f96a003f0012dac18e868221dd5870f5c7929c87</infohash>
<guid>https://academictorrents.com/details/f96a003f0012dac18e868221dd5870f5c7929c87</guid>
<link>https://academictorrents.com/details/f96a003f0012dac18e868221dd5870f5c7929c87</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a 30-minute cadence to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information about this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector are available at https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/ TESS data is being distributed via torrent on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; the service is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is split into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information about the TESS Mission is available at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 3 CCD 2</title>
<category>Dataset</category>
<infohash>60a9491012a5dbca4b28f5b9fb9a2994e60e5985</infohash>
<guid>https://academictorrents.com/details/60a9491012a5dbca4b28f5b9fb9a2994e60e5985</guid>
<link>https://academictorrents.com/details/60a9491012a5dbca4b28f5b9fb9a2994e60e5985</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a 30-minute cadence to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information about this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector are available at https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/ TESS data is being distributed via torrent on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; the service is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is split into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information about the TESS Mission is available at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 3 CCD 1</title>
<category>Dataset</category>
<infohash>bd8d8d9a3f942fc76b33129746babedc9755e0f9</infohash>
<guid>https://academictorrents.com/details/bd8d8d9a3f942fc76b33129746babedc9755e0f9</guid>
<link>https://academictorrents.com/details/bd8d8d9a3f942fc76b33129746babedc9755e0f9</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a 30-minute cadence to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information about this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector are available at https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/ TESS data is being distributed via torrent on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; the service is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is split into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information about the TESS Mission is available at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 2 CCD 4</title>
<category>Dataset</category>
<infohash>d76cd698ac3427b21814d998186f310068a79be6</infohash>
<guid>https://academictorrents.com/details/d76cd698ac3427b21814d998186f310068a79be6</guid>
<link>https://academictorrents.com/details/d76cd698ac3427b21814d998186f310068a79be6</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a 30-minute cadence to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information about this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector are available at https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/ TESS data is being distributed via torrent on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; the service is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is split into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information about the TESS Mission is available at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 2 CCD 3</title>
<category>Dataset</category>
<infohash>741c5c7408d0ddc03e433d8c2090dbbbc57f329a</infohash>
<guid>https://academictorrents.com/details/741c5c7408d0ddc03e433d8c2090dbbbc57f329a</guid>
<link>https://academictorrents.com/details/741c5c7408d0ddc03e433d8c2090dbbbc57f329a</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a 30-minute cadence to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information about this product is available at https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days.
Details about this particular sector are available at https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/ TESS data is being distributed via torrent on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; the service is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO. Each observing sector is split into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This saves you from having to download all the data for a sector. General information about the TESS Mission is available at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 2 CCD 2</title>
<category>Dataset</category>
<infohash>21a02a9ab57a1c2b4ff29feb02c17d911f4b821e</infohash>
<guid>https://academictorrents.com/details/21a02a9ab57a1c2b4ff29feb02c17d911f4b821e</guid>
<link>https://academictorrents.com/details/21a02a9ab57a1c2b4ff29feb02c17d911f4b821e</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 2 CCD 1</title>
<category>Dataset</category>
<infohash>12572882ca2d7deb013eabca908e43accd191f5c</infohash>
<guid>https://academictorrents.com/details/12572882ca2d7deb013eabca908e43accd191f5c</guid>
<link>https://academictorrents.com/details/12572882ca2d7deb013eabca908e43accd191f5c</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 1 CCD 4</title>
<category>Dataset</category>
<infohash>03e8b761ed9c43c4cf2d1ec0953d2dfd4d32175c</infohash>
<guid>https://academictorrents.com/details/03e8b761ed9c43c4cf2d1ec0953d2dfd4d32175c</guid>
<link>https://academictorrents.com/details/03e8b761ed9c43c4cf2d1ec0953d2dfd4d32175c</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003265920</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 1 CCD 3</title>
<category>Dataset</category>
<infohash>efd7520a0c130cdf414f2f6eeb3f6fa9bf49739f</infohash>
<guid>https://academictorrents.com/details/efd7520a0c130cdf414f2f6eeb3f6fa9bf49739f</guid>
<link>https://academictorrents.com/details/efd7520a0c130cdf414f2f6eeb3f6fa9bf49739f</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003300480</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 1 CCD 2</title>
<category>Dataset</category>
<infohash>99f425da3595f124ce35105e0a5afcedebf0a0e8</infohash>
<guid>https://academictorrents.com/details/99f425da3595f124ce35105e0a5afcedebf0a0e8</guid>
<link>https://academictorrents.com/details/99f425da3595f124ce35105e0a5afcedebf0a0e8</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 21 Calibrated FFI - Camera 1 CCD 1</title>
<category>Dataset</category>
<infohash>83aaaa771b583ed7b9a892e79d379f1fb67a0f2a</infohash>
<guid>https://academictorrents.com/details/83aaaa771b583ed7b9a892e79d379f1fb67a0f2a</guid>
<link>https://academictorrents.com/details/83aaaa771b583ed7b9a892e79d379f1fb67a0f2a</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-21 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>45003519360</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 4 CCD 4</title>
<category>Dataset</category>
<infohash>16d2d3c7a49a7dc0a8330172f0f509199cc59645</infohash>
<guid>https://academictorrents.com/details/16d2d3c7a49a7dc0a8330172f0f509199cc59645</guid>
<link>https://academictorrents.com/details/16d2d3c7a49a7dc0a8330172f0f509199cc59645</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42195228480</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 4 CCD 3</title>
<category>Dataset</category>
<infohash>fc51444c8f7adc090996299c986337cd67e08f96</infohash>
<guid>https://academictorrents.com/details/fc51444c8f7adc090996299c986337cd67e08f96</guid>
<link>https://academictorrents.com/details/fc51444c8f7adc090996299c986337cd67e08f96</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42230776320</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 4 CCD 2</title>
<category>Dataset</category>
<infohash>d06f218b81f59a162ad86f23e2079d87a3961283</infohash>
<guid>https://academictorrents.com/details/d06f218b81f59a162ad86f23e2079d87a3961283</guid>
<link>https://academictorrents.com/details/d06f218b81f59a162ad86f23e2079d87a3961283</link>
<description>Mission objectives - The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions. Overview - TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science. TESS data processing pipeline - The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html Contents - The data in this torrent is produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days.
Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20 The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI) at the following URL: https://archive.stsci.edu/tess/ Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO. Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point This will save you from having to download all the data for a sector. General information regarding the TESS mission can be obtained at https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/ Acknowledgments - We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42230776320</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 4 CCD 1</title>
<category>Dataset</category>
<infohash>681d1440cc6ccf2a8351d0fa2b25f0850e806baa</infohash>
<guid>https://academictorrents.com/details/681d1440cc6ccf2a8351d0fa2b25f0850e806baa</guid>
<link>https://academictorrents.com/details/681d1440cc6ccf2a8351d0fa2b25f0850e806baa</link>
<description>Mission objectives

The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations, which can provide planet masses and atmospheric compositions.

Overview

TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect the periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline

The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents

The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; it is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO.

Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets fall on, saving you from having to download all the data for a sector: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point

General information regarding the TESS mission is available at: https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/

Acknowledgments

We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42216855360</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 3 CCD 4</title>
<category>Dataset</category>
<infohash>367b293cac3428f8d71e88a7815358ba6a1bc7c2</infohash>
<guid>https://academictorrents.com/details/367b293cac3428f8d71e88a7815358ba6a1bc7c2</guid>
<link>https://academictorrents.com/details/367b293cac3428f8d71e88a7815358ba6a1bc7c2</link>
<description>Mission objectives

The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations, which can provide planet masses and atmospheric compositions.

Overview

TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect the periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline

The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents

The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; it is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO.

Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets fall on, saving you from having to download all the data for a sector: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point

General information regarding the TESS mission is available at: https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/

Acknowledgments

We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42230776320</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 3 CCD 3</title>
<category>Dataset</category>
<infohash>3498c5cf993bbcba6056be4d4f67b802055392c4</infohash>
<guid>https://academictorrents.com/details/3498c5cf993bbcba6056be4d4f67b802055392c4</guid>
<link>https://academictorrents.com/details/3498c5cf993bbcba6056be4d4f67b802055392c4</link>
<description>Mission objectives

The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations, which can provide planet masses and atmospheric compositions.

Overview

TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect the periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline

The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents

The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; it is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO.

Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets fall on, saving you from having to download all the data for a sector: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point

General information regarding the TESS mission is available at: https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/

Acknowledgments

We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42195270897</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 3 CCD 2</title>
<category>Dataset</category>
<infohash>3b3f683ac08e74f66c496842168cb9cb8be0bd38</infohash>
<guid>https://academictorrents.com/details/3b3f683ac08e74f66c496842168cb9cb8be0bd38</guid>
<link>https://academictorrents.com/details/3b3f683ac08e74f66c496842168cb9cb8be0bd38</link>
<description>Mission objectives

The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations, which can provide planet masses and atmospheric compositions.

Overview

TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect the periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline

The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents

The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; it is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO.

Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets fall on, saving you from having to download all the data for a sector: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point

General information regarding the TESS mission is available at: https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/

Acknowledgments

We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42230776320</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 3 CCD 1</title>
<category>Dataset</category>
<infohash>86068e5a26ac77efb15cf17a24e58ff27f42757c</infohash>
<guid>https://academictorrents.com/details/86068e5a26ac77efb15cf17a24e58ff27f42757c</guid>
<link>https://academictorrents.com/details/86068e5a26ac77efb15cf17a24e58ff27f42757c</link>
<description>Mission objectives

The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations, which can provide planet masses and atmospheric compositions.

Overview

TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect the periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline

The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents

The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; it is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO.

Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets fall on, saving you from having to download all the data for a sector: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point

General information regarding the TESS mission is available at: https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/

Acknowledgments

We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42228913984</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 2 CCD 4</title>
<category>Dataset</category>
<infohash>f40d61eb8705c98aedd420343e306a6d04048b2f</infohash>
<guid>https://academictorrents.com/details/f40d61eb8705c98aedd420343e306a6d04048b2f</guid>
<link>https://academictorrents.com/details/f40d61eb8705c98aedd420343e306a6d04048b2f</link>
<description>Mission objectives

The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations, which can provide planet masses and atmospheric compositions.

Overview

TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect the periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline

The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents

The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; it is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO.

Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets fall on, saving you from having to download all the data for a sector: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point

General information regarding the TESS mission is available at: https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/

Acknowledgments

We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42199079328</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 2 CCD 3</title>
<category>Dataset</category>
<infohash>ba48c464057107e6126a2cb47d930d1da17b9430</infohash>
<guid>https://academictorrents.com/details/ba48c464057107e6126a2cb47d930d1da17b9430</guid>
<link>https://academictorrents.com/details/ba48c464057107e6126a2cb47d930d1da17b9430</link>
<description>Mission objectives

The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations, which can provide planet masses and atmospheric compositions.

Overview

TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect the periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline

The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents

The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; it is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO.

Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets fall on, saving you from having to download all the data for a sector: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point

General information regarding the TESS mission is available at: https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/

Acknowledgments

We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42230776320</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 2 CCD 2</title>
<category>Dataset</category>
<infohash>423e3e7af703f66714c2ab1ba1f4929b6b32c1ce</infohash>
<guid>https://academictorrents.com/details/423e3e7af703f66714c2ab1ba1f4929b6b32c1ce</guid>
<link>https://academictorrents.com/details/423e3e7af703f66714c2ab1ba1f4929b6b32c1ce</link>
<description>Mission objectives

The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations, which can provide planet masses and atmospheric compositions.

Overview

TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect the periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline

The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents

The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; it is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO.

Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets fall on, saving you from having to download all the data for a sector: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point

General information regarding the TESS mission is available at: https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/

Acknowledgments

We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42230776320</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 2 CCD 1</title>
<category>Dataset</category>
<infohash>55ddb72e81275479c5050bcfa512f1622b4bfc9e</infohash>
<guid>https://academictorrents.com/details/55ddb72e81275479c5050bcfa512f1622b4bfc9e</guid>
<link>https://academictorrents.com/details/55ddb72e81275479c5050bcfa512f1622b4bfc9e</link>
<description>Mission objectives

The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission performing a near all-sky survey for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations, which can provide planet masses and atmospheric compositions.

Overview

TESS launched on April 18, 2018 and, after a series of maneuvers, was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. During its 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect the periodic drops in brightness caused by planetary transits. TESS also obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline

The TESS data processing pipeline is developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents

The data in this torrent were produced by the SPOC at NASA Ames Research Center and contain calibrated full-frame images (FFIs) for an observing sector spanning roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT to gauge community interest in this alternative; it is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; older sectors will no longer be seeded by the TSO.

Each observing sector is broken into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets fall on, saving you from having to download all the data for a sector: https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py https://github.com/christopherburke/tess-point

General information regarding the TESS mission is available at: https://tess.mit.edu https://heasarc.gsfc.nasa.gov/docs/tess/ https://archive.stsci.edu/tess/

Acknowledgments

We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42230776320</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 1 CCD 4</title>
<category>Dataset</category>
<infohash>fbab08ba2b7a59bb372a502abe973a0cc037641e</infohash>
<guid>https://academictorrents.com/details/fbab08ba2b7a59bb372a502abe973a0cc037641e</guid>
<link>https://academictorrents.com/details/fbab08ba2b7a59bb372a502abe973a0cc037641e</link>
<description>Mission objectives
The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions.

Overview
TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline
The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents
The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO.

Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on, which saves you from having to download all the data for a sector:
https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py
https://github.com/christopherburke/tess-point

General information regarding the TESS mission can be obtained at:
https://tess.mit.edu
https://heasarc.gsfc.nasa.gov/docs/tess/
https://archive.stsci.edu/tess/

Acknowledgments
We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42230615040</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 1 CCD 3</title>
<category>Dataset</category>
<infohash>6d9cad713fd9765a568b77aefea6c27c84b2da8e</infohash>
<guid>https://academictorrents.com/details/6d9cad713fd9765a568b77aefea6c27c84b2da8e</guid>
<link>https://academictorrents.com/details/6d9cad713fd9765a568b77aefea6c27c84b2da8e</link>
<description>Mission objectives
The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions.

Overview
TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline
The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents
The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO.

Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on, which saves you from having to download all the data for a sector:
https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py
https://github.com/christopherburke/tess-point

General information regarding the TESS mission can be obtained at:
https://tess.mit.edu
https://heasarc.gsfc.nasa.gov/docs/tess/
https://archive.stsci.edu/tess/

Acknowledgments
We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42230776320</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 1 CCD 2</title>
<category>Dataset</category>
<infohash>af2751194f41956f10d5cc4ffc6fff77fa9eeae0</infohash>
<guid>https://academictorrents.com/details/af2751194f41956f10d5cc4ffc6fff77fa9eeae0</guid>
<link>https://academictorrents.com/details/af2751194f41956f10d5cc4ffc6fff77fa9eeae0</link>
<description>Mission objectives
The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions.

Overview
TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline
The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents
The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO.

Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on, which saves you from having to download all the data for a sector:
https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py
https://github.com/christopherburke/tess-point

General information regarding the TESS mission can be obtained at:
https://tess.mit.edu
https://heasarc.gsfc.nasa.gov/docs/tess/
https://archive.stsci.edu/tess/

Acknowledgments
We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42199684928</size>
</item><item>
<title>TESS Sector 20 Calibrated FFI - Camera 1 CCD 1</title>
<category>Dataset</category>
<infohash>c6c5e944cd30db1760fbd8b7dfbb19eeead54e24</infohash>
<guid>https://academictorrents.com/details/c6c5e944cd30db1760fbd8b7dfbb19eeead54e24</guid>
<link>https://academictorrents.com/details/c6c5e944cd30db1760fbd8b7dfbb19eeead54e24</link>
<description>Mission objectives
The Transiting Exoplanet Survey Satellite (TESS) is a NASA-sponsored Astrophysics Explorer-class mission that is performing a near all-sky survey to search for planets transiting nearby stars. The primary goal of TESS is to discover planets smaller than Neptune that transit stars bright enough to enable follow-up spectroscopic observations that can provide planet masses and atmospheric compositions.

Overview
TESS launched on April 18, 2018, and after a series of maneuvers was placed in a highly elliptical 13.7-day orbit around the Earth. TESS began regular science operations on July 25, 2018. In the 2-year prime mission, TESS monitors over 200,000 main-sequence dwarf stars with four wide-field optical CCD cameras to detect periodic drops in brightness caused by planetary transits. TESS obtains full-frame images (FFIs) of the entire four-camera field of view (24 x 96 degrees) at a cadence of 30 minutes to facilitate additional science.

TESS data processing pipeline
The TESS data processing pipeline is being developed by the Science Processing Operations Center (SPOC) at NASA Ames Research Center and builds on the legacy of the Kepler data processing pipeline. Further information regarding this product is available from https://heasarc.gsfc.nasa.gov/docs/tess/documentation.html

Contents
The data in this torrent was produced by the SPOC at NASA Ames Research Center and contains calibrated full-frame images (FFIs) for an observing sector that spans roughly 27 days. Details about this particular sector can be obtained from https://tess.mit.edu/observations/sector-20

The permanent home for this dataset is the official archive for TESS mission data products, the Mikulski Archive for Space Telescopes (MAST), hosted at the Space Telescope Science Institute (STScI): https://archive.stsci.edu/tess/

Providing TESS data through torrents is being done on a trial basis by the TESS Science Office (TSO) at MIT in order to gauge community interest in this alternative. It is subject to cancellation at any time. The current plan is to seed the two most recent TESS observing sectors; sectors older than the two most recent will no longer be seeded by the TSO.

Each observing sector is broken down into 16 separate torrents, one for each camera and CCD combination. If you are interested in data for just a few targets, we encourage you to use the following tools to identify which camera and CCD your targets appear on, which saves you from having to download all the data for a sector:
https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py
https://github.com/christopherburke/tess-point

General information regarding the TESS mission can be obtained at:
https://tess.mit.edu
https://heasarc.gsfc.nasa.gov/docs/tess/
https://archive.stsci.edu/tess/

Acknowledgments
We request that scientific publications using data obtained from the TESS project include the following acknowledgment: This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program.</description>
<size>42195228480</size>
</item><item>
<title>Learn to Play Songs by Ear, Music theory and Ear Training</title>
<category>Course</category>
<infohash>58e572db539b43ef29a7410403d53f0848f3ef6f</infohash>
<guid>https://academictorrents.com/details/58e572db539b43ef29a7410403d53f0848f3ef6f</guid>
<link>https://academictorrents.com/details/58e572db539b43ef29a7410403d53f0848f3ef6f</link>
<description>Learn to play songs by ear. Learn music theory and ear training. Watch the tutorial:</description>
<size>230591365</size>
</item><item>
<title>Pediatric Chest X-ray Pneumonia (Bacterial vs Viral vs Normal) Dataset</title>
<category>Dataset</category>
<infohash>951f829a8eeb4d2839c4a535db95078a9175010b</infohash>
<guid>https://academictorrents.com/details/951f829a8eeb4d2839c4a535db95078a9175010b</guid>
<link>https://academictorrents.com/details/951f829a8eeb4d2839c4a535db95078a9175010b</link>
<description>The dataset is organized into 3 folders (train, test, val) and contains subfolders for each image category (Pneumonia/Normal). There are 5,863 X-ray images (JPEG) in 2 categories (Pneumonia/Normal). Chest X-ray images (anterior-posterior) were selected from retrospective cohorts of pediatric patients aged one to five years from Guangzhou Women and Children's Medical Center, Guangzhou. All chest X-ray imaging was performed as part of patients' routine clinical care. For the analysis of chest X-ray images, all chest radiographs were initially screened for quality control by removing all low-quality or unreadable scans. The diagnoses for the images were then graded by two expert physicians before being cleared for training the AI system. To account for any grading errors, the evaluation set was also checked by a third expert.

https://i.imgur.com/zRBD2Js.png

Figure S6. Illustrative Examples of Chest X-Rays in Patients with Pneumonia, Related to Figure 6. The normal chest X-ray (left panel) depicts clear lungs without any areas of abnormal opacification. Bacterial pneumonia (middle) typically exhibits a focal lobar consolidation, in this case in the right upper lobe (white arrows), whereas viral pneumonia (right) manifests with a more diffuse "interstitial" pattern in both lungs. http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5

## Acknowledgements

Data: https://data.mendeley.com/datasets/rscbjbr9sj/2
License: CC BY 4.0
Citation: http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5</description>
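A minimal, stdlib-only sketch of walking the folder layout described above. The exact split and class folder names (e.g. train/NORMAL) are assumptions based on this description; check them against the extracted archive:

```python
from pathlib import Path

def count_images(root):
    """Count JPEG images per (split, class) under root/<split>/<class>/."""
    counts = {}
    for split_dir in Path(root).iterdir():          # e.g. train, test, val
        if not split_dir.is_dir():
            continue
        for class_dir in split_dir.iterdir():       # e.g. NORMAL, PNEUMONIA
            if class_dir.is_dir():
                counts[(split_dir.name, class_dir.name)] = sum(
                    1 for _ in class_dir.glob("*.jpeg")
                )
    return counts
```

Summing all counts should give the 5,863 images stated above if the archive is complete.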
<size>1236482806</size>
</item><item>
<title>Medical Handbook for Limited Resource Settings</title>
<category>Course</category>
<infohash>08078e0894099530913e9335bdbc804f42f6e872</infohash>
<guid>https://academictorrents.com/details/08078e0894099530913e9335bdbc804f42f6e872</guid>
<link>https://academictorrents.com/details/08078e0894099530913e9335bdbc804f42f6e872</link>
<description>When one practices medicine in their home country, there are standards of care and guidelines, often established by societies and organizations. Many look to the World Health Organization when practicing abroad and in limited-resource countries. Many WHO guidelines can be found online, such as the WHO Model Prescribing Information: Drugs Used in Bacterial Infections (http://apps.who.int/medicinedocs/en/d/Js5406e/9.2.html). Being aware of any country-specific guidelines that may have been created and published is important and can create a better dynamic when interacting with local providers. These may be created by societies or organizations in a particular country, or even by the country's Health Ministry (e.g., the Uganda Clinical Guidelines, https://www.health.go.ug/content/uganda-clinical-guidelines-2016).

Two major features of practicing in limited-resource settings are usually a limited number of available diagnostic tests and a limited medication formulary. Included in this handbook are examples of basic tests that might be available, as well as a few examples of basic formularies. In many countries there are 'required' medications, and these may be stocked despite no obvious local need. Selecting medications and management algorithms often involves complex decisions based on finite and often limited pharmacy budgets and the local availability and cost of medications. Deciding to stock, prescribe, and dispense the latest name-brand antihypertensive for an elderly individual with a slightly elevated blood pressure may result in not having malaria medications for a critically ill child.

The following executive summaries are a starting point for the understanding, diagnosis, and treatment of common presentations one is likely to encounter in low-resource settings, as well as specific diseases. The final decisions regarding how these presentations and diseases are approached and managed should, however, be based on the judgement of a medical professional familiar with the local epidemiology, customs, and standards of care in a particular region. Several aspects of this guide will serve only as the foundation for further judgement on the part of the clinician. As for dosing recommendations, if the dosing is 3x/day, a clinician will need to communicate with the patient or caregiver whether this means 3 times per day with meals, every 8 hours, or simply a maximum per day. Not only does this aspect of care require judgement, but it is also greatly improved by understanding the cultural context. In certain cultures, 3 meals are customary, while in others a morning and evening meal are the norm, with a late-morning break for tea.

Another area where cultural sensitivity is critical is issues surrounding family planning. If one administers an intramuscular contraceptive in a woman's shoulder below the area covered by the shirt sleeve and then affixes a band-aid over the site, the patient's privacy may be compromised, with negative consequences. Recommending co-trimoxazole (trimethoprim-sulfa) to a patient with a urinary tract infection in Sub-Saharan Africa might seem to make perfect sense, yet the patient might be confused and upset: the widespread use of this drug as a prophylactic medication in the HIV-infected population has led to it being viewed as an 'HIV medication'.

Visit https://www.parasiteswithoutborders.com/medical-handbook-for-limited-resource-settings/</description>
<size>15022907366</size>
</item><item>
<title>Misbehaviour Prediction for Autonomous Driving Systems</title>
<category>Dataset</category>
<infohash>221c3c71ac0b09b1bb31698534d50168dc394cc7</infohash>
<guid>https://academictorrents.com/details/221c3c71ac0b09b1bb31698534d50168dc394cc7</guid>
<link>https://academictorrents.com/details/221c3c71ac0b09b1bb31698534d50168dc394cc7</link>
<description>These are reproduction artifacts of the paper "Misbehaviour Prediction for Autonomous Driving Systems" at ICSE 2020. For more information, check out https://github.com/testingautomated-usi/selforacle</description>
<size>8518252988</size>
</item><item>
<title>Identification of Radar Interpretation on Basis of Eye Tracking Data</title>
<category>Dataset</category>
<infohash>5d4d305ba8d2c01a0f086c4615902ec3902dbbfa</infohash>
<guid>https://academictorrents.com/details/5d4d305ba8d2c01a0f086c4615902ec3902dbbfa</guid>
<link>https://academictorrents.com/details/5d4d305ba8d2c01a0f086c4615902ec3902dbbfa</link>
<description>Radar observation is an essential component of maritime navigation. Given the wide range of tasks and activities on the bridge, it is possible that a navigator's eye movements are on the radar, yet he does not pay enough attention for sufficient interpretation. This increases collision risk. We introduce an algorithmic model that can distinguish between interpretation and non-interpretation of a radar video on the basis of eye tracking data. The corresponding data were collected in a study whose design is inspired by real navigation. We extract several features and use machine learning to identify dependencies and impacts in order to classify eye tracking data. The results of various algorithms and calculations were compared to derive the final model. We conclude that logistic regression with an L-BFGS solver has the highest accuracy and recall, can be computed quickly, and is easy to interpret. This torrent contains the acquired anonymized training and test data, as well as a description of our trained model in R.</description>
<size>255</size>
</item><item>
<title>Event GWTC 1</title>
<category>Dataset</category>
<infohash>cf7efcf33370e24985ce883532c069cc43176d1b</infohash>
<guid>https://academictorrents.com/details/cf7efcf33370e24985ce883532c069cc43176d1b</guid>
<link>https://academictorrents.com/details/cf7efcf33370e24985ce883532c069cc43176d1b</link>
<description>We present the results from three gravitational-wave searches for coalescing compact binaries with component masses above 1 M⊙ during the first and second observing runs of the Advanced gravitational-wave detector network. During the first observing run (O1), from September 12th, 2015 to January 19th, 2016, gravitational waves from three binary black hole mergers were detected. The second observing run (O2), which ran from November 30th, 2016 to August 25th, 2017, saw the first detection of gravitational waves from a binary neutron star inspiral, in addition to the observation of gravitational waves from a total of seven binary black hole mergers, four of which we report here for the first time: GW170729, GW170809, GW170818 and GW170823. For all significant gravitational-wave events, we provide estimates of the source properties. The detected binary black holes have total masses between 18.6 (+3.2/−0.7) M⊙ and 84.4 (+15.8/−11.1) M⊙, and range in distance between 320 (+120/−110) Mpc and 2840 (+1400/−1360) Mpc. No neutron star - black hole mergers were detected. In addition to highly significant gravitational-wave events, we also provide a list of marginal event candidates with an estimated false alarm rate less than 1 per 30 days. From these results over the first two observing runs, which include approximately one gravitational-wave detection per 15 days of data searched, we infer merger rates at the 90% confidence intervals of 110–3840 Gpc⁻³ yr⁻¹ for binary neutron stars and 9.7–101 Gpc⁻³ yr⁻¹ for binary black holes assuming fixed population distributions, and determine a neutron star - black hole merger rate 90% upper limit of 610 Gpc⁻³ yr⁻¹.</description>
<size>21540419428</size>
</item><item>
<title>Localization of short duration gravitational-wave transients with the early advanced LIGO and Virgo detectors</title>
<category>Dataset</category>
<infohash>8904671dffa9d296edcd095caca519c678c240f1</infohash>
<guid>https://academictorrents.com/details/8904671dffa9d296edcd095caca519c678c240f1</guid>
<link>https://academictorrents.com/details/8904671dffa9d296edcd095caca519c678c240f1</link>
<description>The Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo, advanced ground-based gravitational-wave detectors, will begin collecting science data in 2015. With first detections expected to follow, it is important to quantify how well generic gravitational-wave transients can be localized on the sky. This is crucial for correctly identifying electromagnetic counterparts as well as understanding gravitational-wave physics and source populations. We present a study of sky localization capabilities for two search and parameter estimation algorithms: coherent WaveBurst, a constrained likelihood algorithm operating in close to real-time, and LALInferenceBurst, a Markov chain Monte Carlo parameter estimation algorithm developed to recover generic transient signals with a latency of a few hours. Furthermore, we focus on the first few years of the advanced detector era, when we expect to have only two (2015) and later three (2016) operational detectors, all below design sensitivity. These detector configurations can produce significantly different sky localizations, which we quantify in detail. We observe a clear improvement in localization of the average detected signal when progressing from two-detector to three-detector networks, as expected. Although localization depends on the waveform morphology, approximately 50% of detected signals would be imaged after observing 100-200 square degrees in 2015 and 60-110 square degrees in 2016, although knowledge of the waveform can reduce this to as little as 22 square degrees. This is the first comprehensive study of sky localization capabilities for generic transients of the early network of advanced LIGO and Virgo detectors, including the early LIGO-only two-detector configuration.</description>
<size>31999671548</size>
</item><item>
<title>Process of Parenting</title>
<category>Paper</category>
<infohash>e67bda94eaadaec64adc61f02ea7bdcf0529bd08</infohash>
<guid>https://academictorrents.com/details/e67bda94eaadaec64adc61f02ea7bdcf0529bd08</guid>
<link>https://academictorrents.com/details/e67bda94eaadaec64adc61f02ea7bdcf0529bd08</link>
<description/>
<size>15813698</size>
</item><item>
<title>Montmorency Dataset</title>
<category>Dataset</category>
<infohash>cf39b5c4285f20c3539cbe9f37f6e04cfde10afa</infohash>
<guid>https://academictorrents.com/details/cf39b5c4285f20c3539cbe9f37f6e04cfde10afa</guid>
<link>https://academictorrents.com/details/cf39b5c4285f20c3539cbe9f37f6e04cfde10afa</link>
<description>Forestry is a major industry in many parts of the world, yet this potential application domain has been overlooked by the robotics community. For instance, forest inventory, a cornerstone of efficient and sustainable forestry, is still traditionally performed manually by qualified professionals. The lack of automation in this particular task, consisting chiefly of measuring tree attributes, limits its speed and therefore the area that can be economically covered. To this effect, we propose to use recent advancements in 3D mapping approaches in forests to automatically measure tree diameters from mobile robot observations. While previous studies showed the potential for such technology, they lacked a rigorous analysis of diameter estimation methods in challenging and large-scale forest environments. Here, we validated multiple diameter estimation methods, including two novel ones, on a new publicly available dataset which includes four different forest sites, 11 trajectories, totaling 1458 tree observations and 14 000 m². From our extensive validation, we concluded that our mapping method is usable in the context of automated forest inventory, with our best diameter estimation method yielding a root mean square error of 3.45 cm for our whole dataset, and 2.04 cm in ideal conditions consisting of mature forest with well-spaced trees. Furthermore, we release this dataset to the public to spur further research in robotic forest inventories. Finally, stemming from this large-scale experiment, we provide recommendations for future deployments of mobile robots in a forestry context.</description>
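The root mean square error figures quoted above (3.45 cm overall, 2.04 cm in ideal conditions) follow the standard definition, sketched here as a small stdlib-only helper for comparing estimated diameters against reference measurements:

```python
import math

def rmse(estimated, reference):
    """Root mean square error between estimated and reference diameters (same units)."""
    if len(estimated) != len(reference):
        raise ValueError("estimated and reference must have the same length")
    return math.sqrt(
        sum((e - r) ** 2 for e, r in zip(estimated, reference)) / len(estimated)
    )
```

Because errors are squared before averaging, a few badly estimated trees dominate the score, which is why RMSE is a stricter metric than mean absolute error for inventory work.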
<size>331254679180</size>
</item><item>
<title>MineRL: A Large-Scale Dataset of Minecraft Demonstrations</title>
<category>Dataset</category>
<infohash>b37b88b9cfaf0ed0c371da7d53c22c284c35c089</infohash>
<guid>https://academictorrents.com/details/b37b88b9cfaf0ed0c371da7d53c22c284c35c089</guid>
<link>https://academictorrents.com/details/b37b88b9cfaf0ed0c371da7d53c22c284c35c089</link>
<description>The sample inefficiency of standard deep reinforcement learning methods precludes their application to many real-world problems. Methods which leverage human demonstrations require fewer samples but have been researched less. As demonstrated in the computer vision and natural language processing communities, large-scale datasets have the capacity to facilitate research by serving as an experimental and benchmarking platform for new methods. However, existing datasets compatible with reinforcement learning simulators do not have sufficient scale, structure, and quality to enable the further development and evaluation of methods focused on using human examples. Therefore, we introduce a comprehensive, large-scale, simulator-paired dataset of human demonstrations: MineRL. The dataset consists of over 60 million automatically annotated state-action pairs across a variety of related tasks in Minecraft, a dynamic, 3D, open-world environment. We present a novel data collection scheme which allows for the ongoing introduction of new tasks and the gathering of complete state information suitable for a variety of methods. We demonstrate the hierarchality, diversity, and scale of the MineRL dataset. Further, we show the difficulty of the Minecraft domain along with the potential of MineRL in developing techniques to solve key research challenges within it.</description>
<size>31820513429</size>
</item><item>
<title>AVSpeech: Large-scale Audio-Visual Speech Dataset </title>
<category>Dataset</category>
<infohash>b078815ca447a3e4d17e8a2a34f13183ec5dec41</infohash>
<guid>https://academictorrents.com/details/b078815ca447a3e4d17e8a2a34f13183ec5dec41</guid>
<link>https://academictorrents.com/details/b078815ca447a3e4d17e8a2a34f13183ec5dec41</link>
<description>AVSpeech is a new, large-scale audio-visual dataset comprising speech video clips with no interfering background noises. The segments are 3-10 seconds long, and in each clip the audible sound in the soundtrack belongs to a single speaking person, visible in the video. In total, the dataset contains roughly 4700 hours* of video segments, from a total of 290k YouTube videos, spanning a wide variety of people, languages and face poses. For more details on how we created the dataset, see our paper, Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation (https://arxiv.org/abs/1804.03619).

* UPLOADER'S NOTE: This dataset contains 3000 hours of video segments, not the entire 4700 hours. 1700 hours were not included because some segments no longer existed on YouTube, had copyright violations, were not available in the United States, or were of poor quality. Over 1 million segments are included in this torrent, each between 3-10 seconds long and in 720p resolution. See the README for how to use this dataset.</description>
<size>1503015135350</size>
</item><item>
<title>Maplestory Character Image Dataset</title>
<category>Dataset</category>
<infohash>34263e708a9953126633af2405dafe8f3185c994</infohash>
<guid>https://academictorrents.com/details/34263e708a9953126633af2405dafe8f3185c994</guid>
<link>https://academictorrents.com/details/34263e708a9953126633af2405dafe8f3185c994</link>
<description>This is a dataset containing 160,053 unique images of real MapleStory players. It could be used to train a GAN or for other applications. If you make something cool with this, I would love to hear about it! :D Examples: https://i.imgur.com/TQqRSuj.jpg</description>
<size>827789824</size>
</item><item>
<title>Data Science Bowl 2017 Lung Cancer Detection (DSB3) </title>
<category>Dataset</category>
<infohash>015f31a94c600256868be155358dc114157507fc</infohash>
<guid>https://academictorrents.com/details/015f31a94c600256868be155358dc114157507fc</guid>
<link>https://academictorrents.com/details/015f31a94c600256868be155358dc114157507fc</link>
<description>In this dataset, you are given over a thousand low-dose CT images from high-risk patients in DICOM format. Each image contains a series with multiple axial slices of the chest cavity. Each image has a variable number of 2D slices, which can vary based on the machine taking the scan and the patient. The DICOM files have a header that contains the necessary information about the patient ID, as well as scan parameters such as the slice thickness. stage1.7z: 285380 dcm files stage2.7z: 186160 dcm files stage1_labels.csv: 1595 labels</description>
<size>164052552818</size>
</item><item>
<title>dHCP 2nd data release -- fMRI pipeline intermediate</title>
<category>Dataset</category>
<infohash>b469546454e43f22804bedbb8fe9e43be2a8d95c</infohash>
<guid>https://academictorrents.com/details/b469546454e43f22804bedbb8fe9e43be2a8d95c</guid>
<link>https://academictorrents.com/details/b469546454e43f22804bedbb8fe9e43be2a8d95c</link>
<description/>
<size>375415939154</size>
</item><item>
<title>Cross-Modal Learning Filters for RGB-Neuromorphic Wormhole Learning</title>
<category>Dataset</category>
<infohash>e07e134eb9149983583d6aab3f033a90769bbc60</infohash>
<guid>https://academictorrents.com/details/e07e134eb9149983583d6aab3f033a90769bbc60</guid>
<link>https://academictorrents.com/details/e07e134eb9149983583d6aab3f033a90769bbc60</link>
<description/>
<size>351935869961</size>
</item><item>
<title>KEWER embeddings</title>
<category>Dataset</category>
<infohash>4778f904ca10f059eaaf27bdd61f7f7fc93abc6e</infohash>
<guid>https://academictorrents.com/details/4778f904ca10f059eaaf27bdd61f7f7fc93abc6e</guid>
<link>https://academictorrents.com/details/4778f904ca10f059eaaf27bdd61f7f7fc93abc6e</link>
<description>KEWER embeddings for ECIR 2020 paper "Joint Word and Entity Embeddings for Entity Retrieval from Knowledge Graph"</description>
<size>48029267284</size>
</item><item>
<title>02JSKOV 2019-2020 Human Computer Interaction</title>
<category>Course</category>
<infohash>9a6996552d026778891d2d96c953b5d9ccd83ab1</infohash>
<guid>https://academictorrents.com/details/9a6996552d026778891d2d96c953b5d9ccd83ab1</guid>
<link>https://academictorrents.com/details/9a6996552d026778891d2d96c953b5d9ccd83ab1</link>
<description>Video lectures of the course "Human Computer Interaction" taken at Politecnico di Torino (Italy) in the academic year 2019/2020. Teachers: Fulvio Corno, Luigi De Russis Course information: http://bit.ly/polito-hci Exercise and code repository: https://github.com/polito-hci-2019</description>
<size>5468598273</size>
</item><item>
<title>Machine learning-based dynamical seasonal prediction of summer rainfall in China</title>
<category>Dataset</category>
<infohash>161aed41be78597bbae1a12d56f44987a1336628</infohash>
<guid>https://academictorrents.com/details/161aed41be78597bbae1a12d56f44987a1336628</guid>
<link>https://academictorrents.com/details/161aed41be78597bbae1a12d56f44987a1336628</link>
<description>These are the datasets shared with the scientific article "Machine learning-based dynamical seasonal prediction of summer rainfall in China", provided for reproducibility.</description>
<size>898750947</size>
</item><item>
<title> Hofeller Files - Stephanie's Copy</title>
<category>Dataset</category>
<infohash>30e27c1d63e8ee36d42457e700e4c1a268718885</infohash>
<guid>https://academictorrents.com/details/30e27c1d63e8ee36d42457e700e4c1a268718885</guid>
<link>https://academictorrents.com/details/30e27c1d63e8ee36d42457e700e4c1a268718885</link>
<description>From: https://drive.google.com/drive/folders/1Pw3lc2QPJ-eEeHszCdcfjbe3uZC1DbKz and: https://www.thehofellerfiles.com/ More info: https://www.npr.org/2020/01/05/785672201/deceased-gop-strategists-daughter-makes-files-public-that-republicans-wanted-sea Copied Jan 5th, 2020: More than a year after his death, a cache of computer files saved on the hard drives of Thomas Hofeller, a prominent Republican redistricting strategist, is becoming public. Republican state lawmakers in North Carolina fought in court to keep copies of these maps, spreadsheets and other documents from entering the public record. But some files have already come to light in recent months through court filings and news reports. They have been cited as evidence of gerrymandering that got political maps thrown out in North Carolina, and they have raised questions about Hofeller's role in the Trump administration's failed push for a census citizenship question. Now more of the files are available online through a website called The Hofeller Files, where Hofeller's daughter, Stephanie Hofeller, published a link to her copy of the files on Sunday after first announcing her plans in a tweet last month. "These are matters that concern the people and their franchise and their access to resources. This is, therefore, the property of the people," Hofeller told NPR. "I won't be satisfied that we the people have found everything until we the people have had a look at it in its entirety." "A hunch that maybe something was wrong" Her decision to put the files online herself is just the latest twist in a series of astonishing events. It had been more than four years since Stephanie had spoken to her father after a family dispute involving the custody of her children landed in court. But on the last day of September in 2018, she "had a hunch that maybe something was wrong," according to her testimony for a lawsuit deposition. 
After his death in 2018, Thomas Hofeller's daughter found hard drives filled with the GOP redistricting strategist's files. Among them was a study in which he concluded that adding a citizenship question to census forms would be "advantageous to Republicans and Non-Hispanic Whites." Sitting in her car parked outside a convenience store in Kentucky, she used her phone to search online for her father's name and found an obituary for Thomas Hofeller, confirming that he had died at the age of 75 more than a month earlier in August. Stephanie then reconnected with her mother, Kathleen, and visited her parents' apartment in North Carolina, where she found four external hard drives and a clear plastic bag containing 18 USB thumb drives in her father's room. Stephanie says her mother encouraged her to take the devices. A treasure trove that led to bombshells It turned out they were filled with photos of Stephanie with her children and other personal items — as well as files from her father's work as a redistricting consultant for Republicans. While looking for an attorney to represent her mother in 2018, Stephanie says she connected with the North Carolina chapter of Common Cause, an advocacy group that had brought a lawsuit against Republican state officials to overturn political maps Thomas Hofeller helped draw. After mentioning the hard drives to Common Cause, Stephanie received a court order to turn them over as potential evidence for the lawsuit. She did so in March after making a copy of some of the files for herself. Since then, the Hofeller files have led to bombshell developments in two major legal battles in the political world. In September, Common Cause won its legal challenge to political maps in North Carolina, where a state court cited some of the files as evidence of gerrymandering designed to unfairly give Republicans an advantage in winning elections and maintaining control of the state legislature. 
"The Court finds that in many election environments, it is the carefully crafted maps, and not the will of the voters, that dictate the election outcomes in a significant number of legislative districts and, ultimately, the majority control of the General Assembly," a three-judge panel of the Wake County Superior Court wrote in their ruling. Other files have become intertwined in the federal lawsuits over the Trump administration's push to add the now-blocked citizenship question to the 2020 census, raising questions about Thomas Hofeller's role and the administration's true motives. Lawyers with the law firm Arnold &amp; Porter — which represented both Common Cause and some of the citizenship question's challengers — uncovered an unpublished study in which Thomas Hofeller concluded using responses from such a question would be "advantageous to Republicans and Non-Hispanic Whites" when voting districts are redrawn. The revelation came weeks before the U.S. Supreme Court issued its ruling in June, affirming a lower court's decision against the question, which has been permanently blocked from forms for the upcoming national head count. Saving "trade secrets" from being "destroyed" Stephanie says she decided to turn the hard drives over for the North Carolina lawsuit in March and to upload her copy of the files online this week in part to preserve a historical record about her father. "His work is really having a profound effect and has had long before anybody really noticed on a broader level," Stephanie says. "I think from the historical standpoint, this slice of life, this little snapshot is going to prove very valuable." Attorneys for Thomas Hofeller's former company, Geographic Strategies, have been trying to keep sealed copies of certain files that were turned over for the North Carolina case, citing them as "trade secrets" and other proprietary information about the company's work. 
While that dispute has played out in a state court in recent months, news organizations including The New Yorker, The New York Times and The Intercept have published reports based on copies they obtained of Hofeller's files. "I originally started sharing them with journalists as a direct response to the assertion by the legislative defendants through counsel that they should be destroyed," Stephanie tells NPR, which previously received a copy of the files from her. The files document the wide reach of Thomas Hofeller's work on political maps across the country — including in Arizona, Florida, Maryland, Mississippi, Missouri, Ohio, Tennessee and Virginia, as well as New York's Nassau County and Texas' Galveston and Nueces counties. In a Microsoft Word document last saved in 2015, Thomas Hofeller warned against changing the Census Bureau's policy of including prisoners in the population counts of the areas where they're incarcerated, expressing concern that "the actual effect on reapportionment and redistricting is not clearly known for individual states." Another ironic twist As a longtime strategist for the Republican National Committee, Thomas Hofeller was known for his warnings to keep redistricting work under wraps. "Treat every statement and document as if it was going to appear on the FRONT PAGE of your local newspaper," one of his slides for a 2011 training session for redistricting officials says. "Emails are the tool of the devil." Stephanie says the irony that some of his work files are now out in public is not lost on her. "I don't think he cared all that much to protect these people after he was gone," she adds. While he was alive, politics governed family life for the Hofellers, Stephanie says. Growing up, she remembers her father correcting how she and others would pronounce gerrymandering with a soft G sound. Her father preferred the hard G (as in Gary) in honor of the term's namesake — former U.S. 
Vice President Elbridge Gerry, who as governor of Massachusetts in 1812 signed into law a political map with a salamander-shaped district that gave the Democratic-Republican party an advantage over the Federalists. Stephanie says her father's stated goal was to use gerrymandering to "create a system wherein the Republican nominee would win." "State legislature, it doesn't matter who votes for what. Congress, it doesn't matter who votes for what. And president, it doesn't matter," she says. Contrary to some people's assumptions given her role in revealing her father's work to perpetuate Republican power, Stephanie says she does not identify as a Democrat, although she has voted for Democratic candidates in the past. "The reason I don't identify as a Democrat is because I'm an anarchist," she says. "I don't believe that we're going to really find solutions to the deeper problems of inequality in a system that demands a hierarchy, which is, by definition, unequal." "All the good stuff" During her deposition in May, she testified there may be more files from her father's work to uncover. Before Stephanie arrived at her parents' apartment, her father's business partner, Dale Oldham, had removed a laptop and a desktop computer with Hofeller's work files, Stephanie said her mother told her. "Dale got all the good stuff," Stephanie told attorneys. Oldham has not responded to NPR's requests for comment. As part of proceedings for the North Carolina case, Oldham has argued in court filings that when Thomas Hofeller died, "Geographic Strategies' computer, various files, and numerous backups in Dr. Hofeller's possession" belonged to the company — of which Oldham is the sole surviving member — and its clients. In November, one of those clients, the Republican National Committee, paid Oldham more than $420,000 for "legal and compliance services" — part of a total of more than $658,000 Oldham has collected from the RNC since May, according to Federal Election Commission filings. 
Common Cause's attorneys have been unable to get Oldham to share any additional documents. But as part of sanctions proceedings related to the citizenship question lawsuits in New York, plaintiffs' attorneys have asked U.S. District Judge Jesse Furman to allow them to subpoena Oldham, who in 2017 consulted through Hofeller with a then-adviser to the Trump administration on the question, according to an email obtained by the House Oversight and Reform Committee. For her part, Stephanie says she's committed to transparency with the public in case she gets access to any more of her father's files. "If I were to find something," she says, "I would most certainly share it."</description>
<size>43921317160</size>
</item><item>
<title>LC25000 Lung and colon histopathological image dataset</title>
<category>Dataset</category>
<infohash>7a638ed187a6180fd6e464b3666a6ea0499af4af</infohash>
<guid>https://academictorrents.com/details/7a638ed187a6180fd6e464b3666a6ea0499af4af</guid>
<link>https://academictorrents.com/details/7a638ed187a6180fd6e464b3666a6ea0499af4af</link>
<description>LC25000 LUNG AND COLON HISTOPATHOLOGICAL IMAGE DATASET The dataset contains 25,000 color images in 5 classes of 5,000 images each. All images are 768 x 768 pixels in size and in jpeg file format. Our dataset can be downloaded as a 1.85 GB zip file, LC25000.zip. After unzipping, the main folder lung_colon_image_set contains two subfolders: colon_image_sets and lung_image_sets. The subfolder colon_image_sets contains two secondary subfolders: the colon_aca subfolder with 5000 images of colon adenocarcinomas and the colon_n subfolder with 5000 images of benign colonic tissue. The subfolder lung_image_sets contains three secondary subfolders: the lung_aca subfolder with 5000 images of lung adenocarcinomas, the lung_scc subfolder with 5000 images of lung squamous cell carcinomas, and the lung_n subfolder with 5000 images of benign lung tissue. File counts: ./lung_image_sets/lung_aca: 5000 ./lung_image_sets/lung_n: 5000 ./lung_image_sets/lung_scc: 5000 ./colon_image_sets/colon_n: 5000 ./colon_image_sets/colon_aca: 5000 https://i.imgur.com/aVdT3ks.jpeg https://i.imgur.com/TNhOxXJ.png</description>
<size>1890299770</size>
</item><item>
<title>Email.cz image spam dataset v1</title>
<category>Dataset</category>
<infohash>06f2389082e9c034fa4a73aaee00131a27e388b6</infohash>
<guid>https://academictorrents.com/details/06f2389082e9c034fa4a73aaee00131a27e388b6</guid>
<link>https://academictorrents.com/details/06f2389082e9c034fa4a73aaee00131a27e388b6</link>
<description>The problem of email image spam classification has been known since 2005, and there are several approaches to the task; lately, those approaches use convolutional neural networks (CNNs). We propose a novel approach to the image spam classification task, based on a CNN and transfer learning, namely ResNet v1 for semantic feature extraction and a one-layer feedforward neural network for classification. We have shown that this approach can achieve state-of-the-art performance on publicly available datasets: 99% F1-score on two datasets [Dredze 2007, Princeton] and 96% F1-score on the combination of these datasets. Due to the availability of GPUs, this approach may be used for just-in-time classification in anti-spam systems handling huge amounts of email. We have also observed that the mentioned publicly available datasets are no longer representative. We overcame this limitation by using a much richer dataset from one week of real traffic of the freemail provider Email.cz. The training data annotation was created from user labeling of the emails. Image spam (and image ham even more so) raises privacy issues. We addressed this by publishing extracted feature vectors with associated classes instead of the images themselves; this data does not violate privacy. We have published the Email.cz image spam dataset v1 via the AcademicTorrents platform and propose a system which achieves up to 96% F1-score with the presented model architecture on this novel dataset. Providing our dataset to the community may help others with solving similar tasks.</description>
<size>2660566545</size>
</item><item>
<title>ImageNet LSVRC 2012 Training Set (lmdb)</title>
<category>Dataset</category>
<infohash>d58437a61c1adf9801df99c6a82960d076cb7312</infohash>
<guid>https://academictorrents.com/details/d58437a61c1adf9801df99c6a82960d076cb7312</guid>
<link>https://academictorrents.com/details/d58437a61c1adf9801df99c6a82960d076cb7312</link>
<description>You have been granted access for non-commercial research/educational use. By accessing the data, you have agreed to the following terms. You (the "Researcher") have requested permission to use the ImageNet database (the "Database") at Princeton University and Stanford University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions: 1. Researcher shall use the Database only for non-commercial research and educational purposes. 2. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. 3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database. 4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions. 5. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time. 6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer. 7. The law of the State of New Jersey shall apply to all disputes under this agreement.</description>
<size>150723780608</size>
</item><item>
<title>ImageNet LSVRC 2012 Validation Set (lmdb)</title>
<category>Dataset</category>
<infohash>207ebd69f80a3707f035cd91a114466a270e044d</infohash>
<guid>https://academictorrents.com/details/207ebd69f80a3707f035cd91a114466a270e044d</guid>
<link>https://academictorrents.com/details/207ebd69f80a3707f035cd91a114466a270e044d</link>
<description>You have been granted access for non-commercial research/educational use. By accessing the data, you have agreed to the following terms. You (the "Researcher") have requested permission to use the ImageNet database (the "Database") at Princeton University and Stanford University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions: 1. Researcher shall use the Database only for non-commercial research and educational purposes. 2. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. 3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database. 4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions. 5. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time. 6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer. 7. The law of the State of New Jersey shall apply to all disputes under this agreement.</description>
<size>6855241728</size>
</item><item>
<title>Mid Magazine: An Ecology of Mids Vol. 1 </title>
<category>Paper</category>
<infohash>50e847d3fec03491e86032643c332f43015ac242</infohash>
<guid>https://academictorrents.com/details/50e847d3fec03491e86032643c332f43015ac242</guid>
<link>https://academictorrents.com/details/50e847d3fec03491e86032643c332f43015ac242</link>
<description>Mid Magazine is a digital arts magazine centered around nature and ecology. It features a mix by Ariel Zetina (Discwoman) and visual art by Chicago-born artists Sam Rolfes, Jermaine Collins, Tara Shafa and Julia Kriegel, as well as some London-based artists like Kai Whiston. Alongside these paintings and collages there are many poems by Chicagoans like Laura Goldstein, Imama Kay, Kimani Rose, and z. arvind swezy. This content is hosted at the Internet Archive at https://archive.org/details/midmag Files may have changed, which prevents torrents from downloading correctly or completely; please check for an updated torrent at https://archive.org/download/midmag/midmag_archive.torrent Note: retrieval usually requires a client that supports webseeding (GetRight style). Note: many Internet Archive torrents contain a "pad file" directory. This directory and the files within it may be erased once retrieval completes. Note: the file midmag_meta.xml contains metadata about this torrent's contents.</description>
<size>143029631</size>
</item><item>
<title>LNDb CT scan dataset (training)</title>
<category>Dataset</category>
<infohash>e3c196b07c8ea94ac5fca872bccf2cc035f4e88d</infohash>
<guid>https://academictorrents.com/details/e3c196b07c8ea94ac5fca872bccf2cc035f4e88d</guid>
<link>https://academictorrents.com/details/e3c196b07c8ea94ac5fca872bccf2cc035f4e88d</link>
<description>The main goal of this challenge is the automatic classification of chest CT scans according to the 2017 Fleischner Society pulmonary nodule guidelines for patient follow-up recommendation. The LNDb dataset contains 294 CT scans collected retrospectively at the Centro Hospitalar e Universitário de São João (CHUSJ) in Porto, Portugal between 2016 and 2018. All data was acquired under approval from the CHUSJ Ethics Committee and was anonymised prior to any analysis to remove personal information except for patient birth year and gender. Further details on patient selection and data acquisition can be consulted in the database description paper. Each CT scan was read by at least one radiologist at CHUSJ to identify pulmonary nodules and other suspicious lesions. A total of 5 radiologists with at least 4 years of experience reading up to 30 CTs per week participated in the annotation process throughout the project. Annotations were performed in a single-blinded fashion, i.e. a radiologist would read the scan once and no consensus or review between the radiologists was performed. The instructions for manual annotation were adapted from LIDC-IDRI. Each radiologist identified the following lesions: - nodule ⩾3mm: any lesion considered to be a nodule by the radiologist with greatest in-plane dimension larger than or equal to 3mm; - nodule &lt;3mm: any lesion considered to be a nodule by the radiologist with greatest in-plane dimension smaller than 3mm; - non-nodule: any pulmonary lesion considered not to be a nodule by the radiologist, but that contains features which could make it identifiable as a nodule. The annotation process varied for the different categories. Nodules ⩾3mm were segmented and subjectively characterized according to LIDC-IDRI (ratings on subtlety, internal structure, calcification, sphericity, margin, lobulation, spiculation, texture and likelihood of malignancy). 
For a complete description of these characteristics the reader is referred to McNitt-Gray et al. For nodules &lt;3mm the nodule centroid was marked and a subjective assessment of the nodule's characteristics was performed. For non-nodules, only the lesion centroid was marked. Given that different radiologists may have read the same CT and no consensus review was performed, variability in radiologist annotations is expected. Note that of the 294 CTs in the LNDb dataset, 58 CTs with annotations by at least two radiologists have been withheld for the test set, along with the corresponding annotations. https://i.imgur.com/MiHSh9c.png</description>
<size>29209516876</size>
</item><item>
<title>Wikipedia_DE_pagelinks_12-12-2019</title>
<category>Dataset</category>
<infohash>006daef68757b6a4eec0d04a04f255205af38e16</infohash>
<guid>https://academictorrents.com/details/006daef68757b6a4eec0d04a04f255205af38e16</guid>
<link>https://academictorrents.com/details/006daef68757b6a4eec0d04a04f255205af38e16</link>
<description>This archive contains information about links between pages in the German version of Wikipedia (as of 12-12-2019). It can be easily parsed and used for tasks such as testing pathfinding algorithms or network visualisation. The graph contains 3,917,116 nodes and 112,705,135 edges. Page titles and redirection tokens are provided as well.</description>
<size>367006423</size>
</item><item>
<title>Illinois DOC labeled faces dataset</title>
<category>Dataset</category>
<infohash>4b9b7e449aa732842aea1a7d4e6413f4507aea99</infohash>
<guid>https://academictorrents.com/details/4b9b7e449aa732842aea1a7d4e6413f4507aea99</guid>
<link>https://academictorrents.com/details/4b9b7e449aa732842aea1a7d4e6413f4507aea99</link>
<description>This is a dataset of prisoner mugshots and associated data (height, weight, etc.). The copyright status is public domain, since it's produced by the government, the photographs do not have sufficient artistic merit, and a mere collection of facts isn't copyrightable. The source is the Illinois Dept. of Corrections. In total, there are 68,149 entries, of which a few hundred have shoddy data. It's useful for neural network training, since it has pictures from both front and side, and they're (manually) labeled with date of birth, name (useful for clustering), weight, height, hair color, eye color, sex, race, and various goodies such as sentence duration and whether they're sex offenders. Here is the readme file: ---BEGIN README--- Scraped from the Illinois DOC.</description>
<size>6362542606</size>
</item><item>
<title>musicnet.tar.gz</title>
<category>Dataset</category>
<infohash>d2b2ae5e3ec4fd475d6e4c517d4c8752a7aa8455</infohash>
<guid>https://academictorrents.com/details/d2b2ae5e3ec4fd475d6e4c517d4c8752a7aa8455</guid>
<link>https://academictorrents.com/details/d2b2ae5e3ec4fd475d6e4c517d4c8752a7aa8455</link>
<description>MusicNet is a collection of 330 freely-licensed classical music recordings, together with over 1 million annotated labels indicating the precise time of each note in every recording, the instrument that plays each note, and the note's position in the metrical structure of the composition. The labels are acquired from musical scores aligned to recordings by dynamic time warping. The labels are verified by trained musicians; we estimate a labeling error rate of 4%. We offer the MusicNet labels to the machine learning and music communities as a resource for training models and a common benchmark for comparing results.</description>
<size>11097394998</size>
</item><item>
<title>NIH Chest X-ray Dataset (Resized to 224x224)</title>
<category>Dataset</category>
<infohash>e615d3aebce373f1dc8bd9d11064da55bdadede0</infohash>
<guid>https://academictorrents.com/details/e615d3aebce373f1dc8bd9d11064da55bdadede0</guid>
<link>https://academictorrents.com/details/e615d3aebce373f1dc8bd9d11064da55bdadede0</link>
<description>This dataset contains versions of the images resized to 224x224. ![](https://i.imgur.com/1InHgLs.png) (1, Atelectasis; 2, Cardiomegaly; 3, Effusion; 4, Infiltration; 5, Mass; 6, Nodule; 7, Pneumonia; 8, Pneumothorax; 9, Consolidation; 10, Edema; 11, Emphysema; 12, Fibrosis; 13, Pleural_Thickening; 14, Hernia) ### Background &amp; Motivation: The chest X-ray exam is one of the most frequent and cost-effective medical imaging examinations. However, clinical diagnosis from chest X-rays can be challenging, and is sometimes believed to be harder than diagnosis via chest CT imaging. Although some promising work has been reported in the past, especially recent deep learning work on Tuberculosis (TB) classification, achieving clinically relevant computer-aided detection and diagnosis (CAD) on all data settings of chest X-rays in real-world medical sites is still very difficult, if not impossible, when only a few thousand images are available for study. This is evident from [2], where the performance of deep neural networks for thorax disease recognition is severely limited by the availability of only 4143 frontal-view images [3] (OpenI is the previously largest publicly available chest X-ray dataset to date). In this database, we provide an enhanced version (with 6 more disease categories and more images) of the dataset used in the recent work [1], which is approximately 27 times the number of frontal chest X-ray images in [3]. Our dataset is extracted from the clinical PACS database at the National Institutes of Health Clinical Center and consists of ~60% of all frontal chest X-rays in the hospital. We therefore expect this dataset to be significantly more representative of the real patient population distributions and realistic clinical diagnosis challenges than any previous chest X-ray dataset. The size of our dataset, in terms of the total number of images and thorax disease frequencies, also better facilitates deep neural network training [2]. 
Refer to [1] for details of how the dataset was extracted and how the image labels were mined through natural language processing (NLP). ### Details: The ChestX-ray dataset comprises 112,120 frontal-view X-ray images of 30,805 unique patients with fourteen text-mined disease image labels (each image can have multiple labels), mined from the associated radiological reports using natural language processing. The fourteen common thoracic pathologies are Atelectasis, Consolidation, Infiltration, Pneumothorax, Edema, Emphysema, Fibrosis, Effusion, Pneumonia, Pleural_Thickening, Cardiomegaly, Nodule, Mass and Hernia, an extension of the 8 common disease patterns listed in our CVPR 2017 paper. Note that the original radiology reports (associated with these chest X-ray studies) are not meant to be publicly shared, for many reasons. The text-mined disease labels are expected to have accuracy &gt;90%. Please find more details and benchmark performance of trained models based on the 14 disease labels in our arXiv paper: https://arxiv.org/abs/1705.02315 ### Contents: 1. 112,120 frontal-view chest X-ray PNG images in 1024x1024 resolution (under the images folder) 2. Metadata for all images (Data_Entry_2017.csv): Image Index, Finding Labels, Follow-up #, Patient ID, Patient Age, Patient Gender, View Position, Original Image Size and Original Image Pixel Spacing. 3. Bounding boxes for ~1000 images (BBox_List_2017.csv): Image Index, Finding Label, Bbox [x, y, w, h]. [x y] are the coordinates of each box's top-left corner; [w h] are the width and height of each box. If you find the dataset useful for your research projects, please cite our CVPR 2017 paper: Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, Ronald M. Summers. ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases, IEEE CVPR, pp. 3462-3471, 2017</description>
<size>2513363817</size>
</item><item>
<title>airbnb</title>
<category>Dataset</category>
<infohash>4dbbc5424f8cb16c80fcb7f766c3ab9360d1e031</infohash>
<guid>https://academictorrents.com/details/4dbbc5424f8cb16c80fcb7f766c3ab9360d1e031</guid>
<link>https://academictorrents.com/details/4dbbc5424f8cb16c80fcb7f766c3ab9360d1e031</link>
<description>Airbnb data: reviews and listings</description>
<size>8266885847</size>
</item><item>
<title>Ocular Disease Intelligent Recognition ODIR-5K</title>
<category>Dataset</category>
<infohash>cf3b8d5ecdd4284eb9b3a80fcfe9b1d621548f72</infohash>
<guid>https://academictorrents.com/details/cf3b8d5ecdd4284eb9b3a80fcfe9b1d621548f72</guid>
<link>https://academictorrents.com/details/cf3b8d5ecdd4284eb9b3a80fcfe9b1d621548f72</link>
<description>We collected a structured ophthalmic database of 5,000 patients with age, color fundus photographs of the left and right eyes, and doctors' diagnostic keywords (in short, ODIR-5K). This dataset is a “real-life” set of patient information collected by Shanggong Medical Technology Co., Ltd. from different hospitals/medical centers in China. In these institutions, fundus images are captured by various cameras on the market, such as Canon, Zeiss and Kowa, resulting in varied image resolutions. Patient-identifying information is removed. Annotations are labeled by trained human readers with quality control management. They classify each patient into eight labels: normal (N), diabetes (D), glaucoma (G), cataract (C), AMD (A), hypertension (H), myopia (M) and other diseases/abnormalities (O), based on both eye images and, additionally, patient age. The publishing of this dataset follows the ethical and privacy rules of China. Table 1 shows one record from the ODIR-5K dataset. The 5,000 patients in this challenge are divided into training, off-site testing and on-site testing subsets. Almost 4,000 cases are used in the training stage, while the others are used in the testing stages (off-site and on-site). Table 2 shows the distribution of case numbers with respect to the eight labels in the different stages. Note: one patient may have one or multiple labels. https://i.imgur.com/vXa8rU9.png https://i.imgur.com/Hs7kYUF.png</description>
<size>1300482376</size>
</item><item>
<title>Wikipedia_FR_pagelinks_09-17-2019</title>
<category>Dataset</category>
<infohash>7f5f27e209b7bef83a12e8f0c8f664311d86d63f</infohash>
<guid>https://academictorrents.com/details/7f5f27e209b7bef83a12e8f0c8f664311d86d63f</guid>
<link>https://academictorrents.com/details/7f5f27e209b7bef83a12e8f0c8f664311d86d63f</link>
<description>This archive contains information about links between pages in the French version of Wikipedia (as of 09-17-2019). It can be easily parsed and used for tasks such as testing pathfinding algorithms or network visualisation. The graph contains 3,703,132 nodes and 155,766,981 edges. Page titles and redirection tokens are provided as well.</description>
<size>460913877</size>
</item><item>
<title>Wikipedia_EN_pagelinks_09-17-2019</title>
<category>Dataset</category>
<infohash>30f1209431c206837e4cb5f62fda2a3106c77214</infohash>
<guid>https://academictorrents.com/details/30f1209431c206837e4cb5f62fda2a3106c77214</guid>
<link>https://academictorrents.com/details/30f1209431c206837e4cb5f62fda2a3106c77214</link>
<description>This archive contains information about links between pages in the English version of Wikipedia (as of 09-17-2019). It can be easily parsed and used for tasks such as testing pathfinding algorithms or network visualisation. The graph contains 14,784,722 nodes and 496,843,511 edges. Page titles and redirection tokens are provided as well.</description>
<size>1679342323</size>
</item><item>
<title>L1000 Connectivity Map perturbational profiles from Broad Institute LINCS Center for Transcriptomics LINCS PHASE *II* (n=354,123; updated March 30, 2017) (Level 5 data)</title>
<category>Dataset</category>
<infohash>99970027a2a6bd6eceb8b9113346f899a50e17be</infohash>
<guid>https://academictorrents.com/details/99970027a2a6bd6eceb8b9113346f899a50e17be</guid>
<link>https://academictorrents.com/details/99970027a2a6bd6eceb8b9113346f899a50e17be</link>
<description>The Library of Integrated Network-Based Cellular Signatures (LINCS) is an NIH program which funds the generation of perturbational profiles across multiple cell and perturbation types, as well as read-outs, at a massive scale. The LINCS Center for Transcriptomics at the Broad Institute uses the L1000 high-throughput gene-expression assay to build a Connectivity Map that seeks to enable the discovery of functional connections between drugs, genes and diseases through analysis of patterns induced by common gene-expression changes. This is the Level 5 data: GSE70138_Broad_LINCS_Level5_COMPZ_n118050x12328_2017-03-06.gctx.gz (Series GSE70138). L1000 data is provided at five levels of the data processing pipeline. Level 1: raw unprocessed flow cytometry data from Luminex (LXB). Level 2: gene expression values per 1,000 genes after deconvolution (GEX). Level 3: quantile-normalized gene expression profiles of landmark genes and imputed transcripts (Q2NORM or INF). Level 4: gene signatures computed using z-scores relative to the plate population as control (ZSPCINF) or relative to the plate vehicle control (ZSVCINF). Level 5: differential gene expression signatures. https://i.imgur.com/zIeOFMt.png</description>
<size>5365179698</size>
</item><item>
<title>CAMELYON17</title>
<category>Dataset</category>
<infohash>fa82a73b0903c5a75cf24bc68a980727bb1a807e</infohash>
<guid>https://academictorrents.com/details/fa82a73b0903c5a75cf24bc68a980727bb1a807e</guid>
<link>https://academictorrents.com/details/fa82a73b0903c5a75cf24bc68a980727bb1a807e</link>
<description># CAMELYON17 Data Set ## Overview Building on the success of its predecessor, CAMELYON17 is the second grand challenge in pathology organised by the [Computational Pathology Group](http://www.diagnijmegen.nl/index.php/Digital_Pathology) of the Radboud University Medical Center (Radboudumc) in Nijmegen, The Netherlands. The goal of this challenge is to evaluate new and existing algorithms for automated detection and classification of breast cancer metastases in whole-slide images of histological lymph node sections. This task has high clinical relevance and would normally require extensive microscopic assessment by pathologists. The presence of metastases in lymph nodes has therapeutic implications for breast cancer patients, so an automated solution would hold great promise for reducing the workload of pathologists while at the same time reducing subjectivity in diagnosis. For the complete description of the challenge and the data set, please visit the [challenge](https://camelyon17.grand-challenge.org) website. ## Data ### Images The data in this challenge comprise a total of 1000 whole-slide images (WSIs) of sentinel lymph nodes from 5 different medical centers in The Netherlands: Radboud University Medical Center in Nijmegen, Canisius-Wilhelmina Hospital in Nijmegen, University Medical Center Utrecht, Rijnstate Hospital in Arnhem, and Laboratorium Pathologie Oost-Nederland in Hengelo. The data set is divided into training and testing sets, with 20 patients from each center in both sets. For each patient, the 5 shared whole-slide images are zipped together into a single ZIP file. The patient pN-stages and the slide-level labels in the training set are shared in the *stage_labels.csv* file. The slides are converted to generic [TIFF](https://www.awaresystems.be/imaging/tiff/bigtiff.html) (Tagged Image File Format) using an open-source file converter that is part of the [ASAP](https://github.com/GeertLitjens/ASAP) package. 
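The annotation XML files in this data set expose Annotation nodes with a "PartOfGroup" attribute (see Annotations). A minimal sketch of grouping annotations by that attribute with the standard library, using a small inline example; the Coordinates/Coordinate layout with X/Y attributes is assumed here and should be checked against a real ASAP file:

```python
import xml.etree.ElementTree as ET

# Tiny inline stand-in for an ASAP annotation XML file. Only the
# Annotation node and its PartOfGroup attribute are documented in this
# description; the coordinate elements are an assumed layout.
xml_text = """
<ASAP_Annotations>
  <Annotations>
    <Annotation Name="Annotation 0" PartOfGroup="metastases">
      <Coordinates>
        <Coordinate Order="0" X="100.0" Y="200.0"/>
        <Coordinate Order="1" X="150.0" Y="210.0"/>
      </Coordinates>
    </Annotation>
    <Annotation Name="Annotation 1" PartOfGroup="normal">
      <Coordinates/>
    </Annotation>
  </Annotations>
</ASAP_Annotations>
"""

root = ET.fromstring(xml_text)  # for a real file: ET.parse(path).getroot()

# Collect each annotation's polygon vertices under its group label.
groups = {}
for ann in root.iter("Annotation"):
    group = ann.get("PartOfGroup")  # "metastases" or "normal"
    coords = [(float(c.get("X")), float(c.get("Y")))
              for c in ann.iter("Coordinate")]
    groups.setdefault(group, []).append(coords)
```

The "metastases" polygons can then be rasterized into tumor masks, with "normal" polygons subtracted as cut-outs.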
https://i.imgur.com/nKB1Kqq.png ### Annotations From each center, 10 slides are exhaustively annotated, and the annotations are shared in XML format. The XML files are compatible with the [ASAP](https://github.com/GeertLitjens/ASAP) software, which you may download to visualize the annotations overlaid on the whole-slide image. The provided XML files may contain two groups of annotations ("metastases" or "normal"), which can be accessed from the "PartOfGroup" attribute of the Annotation node in the XML file. Annotations belonging to the group "metastases" represent tumor areas, and annotations within the group "normal" are non-tumor areas which have been cut out from the original annotations in the "metastases" group.</description>
<size>2483605998126</size>
</item><item>
<title>2_2_5_1</title>
<category>Dataset</category>
<infohash>9aa9bada93ea915145d7a22cc4c8934904311d14</infohash>
<guid>https://academictorrents.com/details/9aa9bada93ea915145d7a22cc4c8934904311d14</guid>
<link>https://academictorrents.com/details/9aa9bada93ea915145d7a22cc4c8934904311d14</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>84938682596</size>
</item><item>
<title>2_2_4_1</title>
<category>Dataset</category>
<infohash>cc0b593b044e6bbd235b97765fa0659b726fc3ee</infohash>
<guid>https://academictorrents.com/details/cc0b593b044e6bbd235b97765fa0659b726fc3ee</guid>
<link>https://academictorrents.com/details/cc0b593b044e6bbd235b97765fa0659b726fc3ee</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>83717583379</size>
</item><item>
<title>2_2_3_1</title>
<category>Dataset</category>
<infohash>49ab3d7a43bd7efbaed42222a73aa5ecf260c64f</infohash>
<guid>https://academictorrents.com/details/49ab3d7a43bd7efbaed42222a73aa5ecf260c64f</guid>
<link>https://academictorrents.com/details/49ab3d7a43bd7efbaed42222a73aa5ecf260c64f</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>95496624603</size>
</item><item>
<title>2_2_2_1</title>
<category>Dataset</category>
<infohash>b53885c1ec87b62ee700a5336e43b00b30c88d6e</infohash>
<guid>https://academictorrents.com/details/b53885c1ec87b62ee700a5336e43b00b30c88d6e</guid>
<link>https://academictorrents.com/details/b53885c1ec87b62ee700a5336e43b00b30c88d6e</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>33622244405</size>
</item><item>
<title>2_2_1_4</title>
<category>Dataset</category>
<infohash>0267ad7477e3b63e2f7edc462482c725cb5f9f04</infohash>
<guid>https://academictorrents.com/details/0267ad7477e3b63e2f7edc462482c725cb5f9f04</guid>
<link>https://academictorrents.com/details/0267ad7477e3b63e2f7edc462482c725cb5f9f04</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>54398022448</size>
</item><item>
<title>2_2_1_3</title>
<category>Dataset</category>
<infohash>4a2d4f51f02bcb82b36570b78f9d20a4299e1507</infohash>
<guid>https://academictorrents.com/details/4a2d4f51f02bcb82b36570b78f9d20a4299e1507</guid>
<link>https://academictorrents.com/details/4a2d4f51f02bcb82b36570b78f9d20a4299e1507</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>54572913243</size>
</item><item>
<title>2_2_1_2</title>
<category>Dataset</category>
<infohash>5753cd7e35c0f2b8c55d40544cb234206cb33dc6</infohash>
<guid>https://academictorrents.com/details/5753cd7e35c0f2b8c55d40544cb234206cb33dc6</guid>
<link>https://academictorrents.com/details/5753cd7e35c0f2b8c55d40544cb234206cb33dc6</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>30850608122</size>
</item><item>
<title>2_2_1_1</title>
<category>Dataset</category>
<infohash>08fbd7fbe5b7143aec48f5b836d0fa532e5072a0</infohash>
<guid>https://academictorrents.com/details/08fbd7fbe5b7143aec48f5b836d0fa532e5072a0</guid>
<link>https://academictorrents.com/details/08fbd7fbe5b7143aec48f5b836d0fa532e5072a0</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>104969199738</size>
</item><item>
<title>2_1_10_3</title>
<category>Dataset</category>
<infohash>655386b5cc0474a728d9ace00a7d0d55809abd71</infohash>
<guid>https://academictorrents.com/details/655386b5cc0474a728d9ace00a7d0d55809abd71</guid>
<link>https://academictorrents.com/details/655386b5cc0474a728d9ace00a7d0d55809abd71</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>37687168222</size>
</item><item>
<title>2_1_10_2</title>
<category>Dataset</category>
<infohash>6e124aadd98e585c6a19233a179b38c37cdee700</infohash>
<guid>https://academictorrents.com/details/6e124aadd98e585c6a19233a179b38c37cdee700</guid>
<link>https://academictorrents.com/details/6e124aadd98e585c6a19233a179b38c37cdee700</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>31942905305</size>
</item><item>
<title>2_1_10_1</title>
<category>Dataset</category>
<infohash>725c1366252df41eee4425f656c664fcfa347ac3</infohash>
<guid>https://academictorrents.com/details/725c1366252df41eee4425f656c664fcfa347ac3</guid>
<link>https://academictorrents.com/details/725c1366252df41eee4425f656c664fcfa347ac3</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>41032761241</size>
</item><item>
<title>2_1_9_2</title>
<category>Dataset</category>
<infohash>023233eac980a7625da06af54e58042db1c430e3</infohash>
<guid>https://academictorrents.com/details/023233eac980a7625da06af54e58042db1c430e3</guid>
<link>https://academictorrents.com/details/023233eac980a7625da06af54e58042db1c430e3</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>54743949762</size>
</item><item>
<title>2_1_9_1</title>
<category>Dataset</category>
<infohash>40d19ccc91ccda9b683a055b70bfaceab5022a6a</infohash>
<guid>https://academictorrents.com/details/40d19ccc91ccda9b683a055b70bfaceab5022a6a</guid>
<link>https://academictorrents.com/details/40d19ccc91ccda9b683a055b70bfaceab5022a6a</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>129279355989</size>
</item><item>
<title>2_1_8_3</title>
<category>Dataset</category>
<infohash>5b84832da115d66645e985cbff647fd2babf9839</infohash>
<guid>https://academictorrents.com/details/5b84832da115d66645e985cbff647fd2babf9839</guid>
<link>https://academictorrents.com/details/5b84832da115d66645e985cbff647fd2babf9839</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>94272579443</size>
</item><item>
<title>2_1_8_2</title>
<category>Dataset</category>
<infohash>b8a7699844a8d24f2a193b01c480d2669d4c632d</infohash>
<guid>https://academictorrents.com/details/b8a7699844a8d24f2a193b01c480d2669d4c632d</guid>
<link>https://academictorrents.com/details/b8a7699844a8d24f2a193b01c480d2669d4c632d</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>81067262241</size>
</item><item>
<title>2_1_8_1</title>
<category>Dataset</category>
<infohash>a22844d4573edf4c778300edc556ec0a894b84d2</infohash>
<guid>https://academictorrents.com/details/a22844d4573edf4c778300edc556ec0a894b84d2</guid>
<link>https://academictorrents.com/details/a22844d4573edf4c778300edc556ec0a894b84d2</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>75593745261</size>
</item><item>
<title>2_1_7_3</title>
<category>Dataset</category>
<infohash>b017501233527200c04863715cba2bbcd079b138</infohash>
<guid>https://academictorrents.com/details/b017501233527200c04863715cba2bbcd079b138</guid>
<link>https://academictorrents.com/details/b017501233527200c04863715cba2bbcd079b138</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>65324716344</size>
</item><item>
<title>2_1_7_2</title>
<category>Dataset</category>
<infohash>fa6c25a09fd94427ad3ee3fb4ade3c90177dc5a3</infohash>
<guid>https://academictorrents.com/details/fa6c25a09fd94427ad3ee3fb4ade3c90177dc5a3</guid>
<link>https://academictorrents.com/details/fa6c25a09fd94427ad3ee3fb4ade3c90177dc5a3</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>98209347380</size>
</item><item>
<title>2_1_7_1</title>
<category>Dataset</category>
<infohash>0cd9c8523df6c11538871f5a70afb48d3970353f</infohash>
<guid>https://academictorrents.com/details/0cd9c8523df6c11538871f5a70afb48d3970353f</guid>
<link>https://academictorrents.com/details/0cd9c8523df6c11538871f5a70afb48d3970353f</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>79176461124</size>
</item><item>
<title>2_1_6_1</title>
<category>Dataset</category>
<infohash>0a48698aea8f0c68033c874645a74cff866b09ec</infohash>
<guid>https://academictorrents.com/details/0a48698aea8f0c68033c874645a74cff866b09ec</guid>
<link>https://academictorrents.com/details/0a48698aea8f0c68033c874645a74cff866b09ec</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>74829645613</size>
</item><item>
<title>2_1_5_3</title>
<category>Dataset</category>
<infohash>330a4a941ebdf15df03fcdb10fb090d93832d2f4</infohash>
<guid>https://academictorrents.com/details/330a4a941ebdf15df03fcdb10fb090d93832d2f4</guid>
<link>https://academictorrents.com/details/330a4a941ebdf15df03fcdb10fb090d93832d2f4</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>53070651376</size>
</item><item>
<title>2_1_5_2</title>
<category>Dataset</category>
<infohash>bc54007ec5581d1b7f251b8e034df94b878ec787</infohash>
<guid>https://academictorrents.com/details/bc54007ec5581d1b7f251b8e034df94b878ec787</guid>
<link>https://academictorrents.com/details/bc54007ec5581d1b7f251b8e034df94b878ec787</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>66267554591</size>
</item><item>
<title>2_1_5_1</title>
<category>Dataset</category>
<infohash>217ece45822a47622f7bfc3c37bea7a424813ef1</infohash>
<guid>https://academictorrents.com/details/217ece45822a47622f7bfc3c37bea7a424813ef1</guid>
<link>https://academictorrents.com/details/217ece45822a47622f7bfc3c37bea7a424813ef1</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>57848682004</size>
</item><item>
<title>2_1_4_1</title>
<category>Dataset</category>
<infohash>44b12f14e8df02280cf8aaf81fd508959ea6ac38</infohash>
<guid>https://academictorrents.com/details/44b12f14e8df02280cf8aaf81fd508959ea6ac38</guid>
<link>https://academictorrents.com/details/44b12f14e8df02280cf8aaf81fd508959ea6ac38</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>54448171863</size>
</item><item>
<title>2_1_3_1</title>
<category>Dataset</category>
<infohash>e28dfe009f0de89c8b17e4a5408bdda2c97fa5fb</infohash>
<guid>https://academictorrents.com/details/e28dfe009f0de89c8b17e4a5408bdda2c97fa5fb</guid>
<link>https://academictorrents.com/details/e28dfe009f0de89c8b17e4a5408bdda2c97fa5fb</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>46983653139</size>
</item><item>
<title>2_1_2_3</title>
<category>Dataset</category>
<infohash>fd28f9186e238573feb0af0cd96fd973f1ff7974</infohash>
<guid>https://academictorrents.com/details/fd28f9186e238573feb0af0cd96fd973f1ff7974</guid>
<link>https://academictorrents.com/details/fd28f9186e238573feb0af0cd96fd973f1ff7974</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>16248155693</size>
</item><item>
<title>2_1_2_2</title>
<category>Dataset</category>
<infohash>87fee66fbb72e4c638bc566293c64064ff756b96</infohash>
<guid>https://academictorrents.com/details/87fee66fbb72e4c638bc566293c64064ff756b96</guid>
<link>https://academictorrents.com/details/87fee66fbb72e4c638bc566293c64064ff756b96</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>84936573616</size>
</item><item>
<title>2_1_2_1</title>
<category>Dataset</category>
<infohash>0f4af3834a17cd5fb64148bc6b6dca4344234dff</infohash>
<guid>https://academictorrents.com/details/0f4af3834a17cd5fb64148bc6b6dca4344234dff</guid>
<link>https://academictorrents.com/details/0f4af3834a17cd5fb64148bc6b6dca4344234dff</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>21563737345</size>
</item><item>
<title>2_1_1_1</title>
<category>Dataset</category>
<infohash>b8c7a756fc878b48793782a9e9866b1041396eef</infohash>
<guid>https://academictorrents.com/details/b8c7a756fc878b48793782a9e9866b1041396eef</guid>
<link>https://academictorrents.com/details/b8c7a756fc878b48793782a9e9866b1041396eef</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>10324279237</size>
</item><item>
<title>Icentia11k: An Unsupervised ECG Representation Learning Dataset for Arrhythmia Subtype Discovery</title>
<category>Dataset</category>
<infohash>af04abfe9a3c96b30e5dd029eb185e19a7055272</infohash>
<guid>https://academictorrents.com/details/af04abfe9a3c96b30e5dd029eb185e19a7055272</guid>
<link>https://academictorrents.com/details/af04abfe9a3c96b30e5dd029eb185e19a7055272</link>
<description>We release the largest public ECG dataset of raw signals for representation learning, containing over 11k patients and 2 billion labelled beats. Our goal is to enable the training of semi-supervised ECG models as well as the discovery of unknown subtypes of arrhythmia and anomalous ECG signal events. To this end, we propose an unsupervised representation learning task, evaluated in a semi-supervised fashion. We provide a set of baselines for different feature extractors that can be built upon. Additionally, we perform qualitative evaluations on results from PCA embeddings, where we identify some clustering of known subtypes, indicating the potential of representation learning for arrhythmia subtype discovery. https://i.imgur.com/5PxNneL.png License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) http://creativecommons.org/licenses/by-nc-sa/4.0/</description>
<size>271270264489</size>
</item><item>
<title>1_2_7_1</title>
<category>Dataset</category>
<infohash>9d2a0c4ce2029991de586f903569fdad83bbbfb1</infohash>
<guid>https://academictorrents.com/details/9d2a0c4ce2029991de586f903569fdad83bbbfb1</guid>
<link>https://academictorrents.com/details/9d2a0c4ce2029991de586f903569fdad83bbbfb1</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>7162877627</size>
</item><item>
<title>1_2_6_1</title>
<category>Dataset</category>
<infohash>29af1f0db11e5e78dbc4827cc8a9c418e1658cb9</infohash>
<guid>https://academictorrents.com/details/29af1f0db11e5e78dbc4827cc8a9c418e1658cb9</guid>
<link>https://academictorrents.com/details/29af1f0db11e5e78dbc4827cc8a9c418e1658cb9</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>12984758196</size>
</item><item>
<title>1_2_5_3</title>
<category>Dataset</category>
<infohash>0382124e42c96e53b906399fb08075d299d9a18f</infohash>
<guid>https://academictorrents.com/details/0382124e42c96e53b906399fb08075d299d9a18f</guid>
<link>https://academictorrents.com/details/0382124e42c96e53b906399fb08075d299d9a18f</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>87941345850</size>
</item><item>
<title>1_2_5_2</title>
<category>Dataset</category>
<infohash>3f0d0a226d285ddac925d3458c9d13b3c33c9f0a</infohash>
<guid>https://academictorrents.com/details/3f0d0a226d285ddac925d3458c9d13b3c33c9f0a</guid>
<link>https://academictorrents.com/details/3f0d0a226d285ddac925d3458c9d13b3c33c9f0a</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>45591118861</size>
</item><item>
<title>1_2_5_1</title>
<category>Dataset</category>
<infohash>8a13155a63be01a68869524094eeabeb6ca66f5e</infohash>
<guid>https://academictorrents.com/details/8a13155a63be01a68869524094eeabeb6ca66f5e</guid>
<link>https://academictorrents.com/details/8a13155a63be01a68869524094eeabeb6ca66f5e</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>64467253764</size>
</item><item>
<title>1_2_4_3</title>
<category>Dataset</category>
<infohash>d7c1ccf7e98f7aff000a89b33c6b508689f8a825</infohash>
<guid>https://academictorrents.com/details/d7c1ccf7e98f7aff000a89b33c6b508689f8a825</guid>
<link>https://academictorrents.com/details/d7c1ccf7e98f7aff000a89b33c6b508689f8a825</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>86880352045</size>
</item><item>
<title>1_2_4_2</title>
<category>Dataset</category>
<infohash>5ba5bf586b309eecb5c1d554cc24d414d6cce7c4</infohash>
<guid>https://academictorrents.com/details/5ba5bf586b309eecb5c1d554cc24d414d6cce7c4</guid>
<link>https://academictorrents.com/details/5ba5bf586b309eecb5c1d554cc24d414d6cce7c4</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>28288491311</size>
</item><item>
<title>1_2_4_1</title>
<category>Dataset</category>
<infohash>3e4037a44bd7a96409e2a62429012f62adfa85f4</infohash>
<guid>https://academictorrents.com/details/3e4037a44bd7a96409e2a62429012f62adfa85f4</guid>
<link>https://academictorrents.com/details/3e4037a44bd7a96409e2a62429012f62adfa85f4</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>64747537337</size>
</item><item>
<title>1_2_3_1</title>
<category>Dataset</category>
<infohash>4e1a0b71bb11e622790c69ead91f493357b11cc4</infohash>
<guid>https://academictorrents.com/details/4e1a0b71bb11e622790c69ead91f493357b11cc4</guid>
<link>https://academictorrents.com/details/4e1a0b71bb11e622790c69ead91f493357b11cc4</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>3563663716</size>
</item><item>
<title>1_2_2_1</title>
<category>Dataset</category>
<infohash>d3fa2b7da3eb1433a54e2d4cc1388639b8d91ff5</infohash>
<guid>https://academictorrents.com/details/d3fa2b7da3eb1433a54e2d4cc1388639b8d91ff5</guid>
<link>https://academictorrents.com/details/d3fa2b7da3eb1433a54e2d4cc1388639b8d91ff5</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>6643301973</size>
</item><item>
<title>1_2_1_11</title>
<category>Dataset</category>
<infohash>296227b286e10782360662fc7e467521844039ec</infohash>
<guid>https://academictorrents.com/details/296227b286e10782360662fc7e467521844039ec</guid>
<link>https://academictorrents.com/details/296227b286e10782360662fc7e467521844039ec</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>36328738878</size>
</item><item>
<title>1_2_1_10</title>
<category>Dataset</category>
<infohash>40f4add0b188d58a3710c279364a444dc649c627</infohash>
<guid>https://academictorrents.com/details/40f4add0b188d58a3710c279364a444dc649c627</guid>
<link>https://academictorrents.com/details/40f4add0b188d58a3710c279364a444dc649c627</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>24214500213</size>
</item><item>
<title>1_2_1_9</title>
<category>Dataset</category>
<infohash>37098f43f34d3db9581e113cac51f8ef568ca3ce</infohash>
<guid>https://academictorrents.com/details/37098f43f34d3db9581e113cac51f8ef568ca3ce</guid>
<link>https://academictorrents.com/details/37098f43f34d3db9581e113cac51f8ef568ca3ce</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>33681182852</size>
</item><item>
<title>1_2_1_8</title>
<category>Dataset</category>
<infohash>e7e27512cdbe4bfcb13da1a9e9b46e64aaea9eec</infohash>
<guid>https://academictorrents.com/details/e7e27512cdbe4bfcb13da1a9e9b46e64aaea9eec</guid>
<link>https://academictorrents.com/details/e7e27512cdbe4bfcb13da1a9e9b46e64aaea9eec</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>22007981176</size>
</item><item>
<title>1_2_1_7</title>
<category>Dataset</category>
<infohash>4f9c69a0f9f38a1be30c758a2bfebe35fd259bad</infohash>
<guid>https://academictorrents.com/details/4f9c69a0f9f38a1be30c758a2bfebe35fd259bad</guid>
<link>https://academictorrents.com/details/4f9c69a0f9f38a1be30c758a2bfebe35fd259bad</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>28987371329</size>
</item><item>
<title>1_2_1_6</title>
<category>Dataset</category>
<infohash>e7ec1ac1086a983f35eb0a0842729bb91c980127</infohash>
<guid>https://academictorrents.com/details/e7ec1ac1086a983f35eb0a0842729bb91c980127</guid>
<link>https://academictorrents.com/details/e7ec1ac1086a983f35eb0a0842729bb91c980127</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>17847600640</size>
</item><item>
<title>1_2_1_5</title>
<category>Dataset</category>
<infohash>398708c7516f076c71b9427bb1f3428b37130c05</infohash>
<guid>https://academictorrents.com/details/398708c7516f076c71b9427bb1f3428b37130c05</guid>
<link>https://academictorrents.com/details/398708c7516f076c71b9427bb1f3428b37130c05</link>
<description>Navigation and localisation dataset for self driving cars and autonomous robots.</description>
<size>7988603561</size>
</item><item>
<title>1_2_1_4</title>
<category>Dataset</category>
<infohash>a716ee6694a6c0d155d72198311d032081fd3f8c</infohash>
<guid>https://academictorrents.com/details/a716ee6694a6c0d155d72198311d032081fd3f8c</guid>
<link>https://academictorrents.com/details/a716ee6694a6c0d155d72198311d032081fd3f8c</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>6966881460</size>
</item><item>
<title>1_2_1_3</title>
<category>Dataset</category>
<infohash>16b0a311e66d2dc4397028b8bc697841692e4e13</infohash>
<guid>https://academictorrents.com/details/16b0a311e66d2dc4397028b8bc697841692e4e13</guid>
<link>https://academictorrents.com/details/16b0a311e66d2dc4397028b8bc697841692e4e13</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>37999822908</size>
</item><item>
<title>1_2_1_2</title>
<category>Dataset</category>
<infohash>6b8a25cf41efaa0f2713037cce70d8b9cd4069af</infohash>
<guid>https://academictorrents.com/details/6b8a25cf41efaa0f2713037cce70d8b9cd4069af</guid>
<link>https://academictorrents.com/details/6b8a25cf41efaa0f2713037cce70d8b9cd4069af</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>14330629604</size>
</item><item>
<title>1_2_1_1</title>
<category>Dataset</category>
<infohash>8f4a68f666476cee1ed23a25446174c64f5746d8</infohash>
<guid>https://academictorrents.com/details/8f4a68f666476cee1ed23a25446174c64f5746d8</guid>
<link>https://academictorrents.com/details/8f4a68f666476cee1ed23a25446174c64f5746d8</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>13517774434</size>
</item><item>
<title>1_1_7_1</title>
<category>Dataset</category>
<infohash>cdd5ba23b4171329f40c2d475d15e4a5b85f6b3a</infohash>
<guid>https://academictorrents.com/details/cdd5ba23b4171329f40c2d475d15e4a5b85f6b3a</guid>
<link>https://academictorrents.com/details/cdd5ba23b4171329f40c2d475d15e4a5b85f6b3a</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>31847591540</size>
</item><item>
<title>1_1_6_2</title>
<category>Dataset</category>
<infohash>1761a68a1091cba41a3cab91c54818d5962e67f6</infohash>
<guid>https://academictorrents.com/details/1761a68a1091cba41a3cab91c54818d5962e67f6</guid>
<link>https://academictorrents.com/details/1761a68a1091cba41a3cab91c54818d5962e67f6</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>48635156271</size>
</item><item>
<title>1_1_6_1</title>
<category>Dataset</category>
<infohash>ff8bcff06d8efe48ca3582f204309c2e54be0662</infohash>
<guid>https://academictorrents.com/details/ff8bcff06d8efe48ca3582f204309c2e54be0662</guid>
<link>https://academictorrents.com/details/ff8bcff06d8efe48ca3582f204309c2e54be0662</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>17948049303</size>
</item><item>
<title>1_1_5_2</title>
<category>Dataset</category>
<infohash>db8844f1170168a46acb13a5a959ea69d0f15acc</infohash>
<guid>https://academictorrents.com/details/db8844f1170168a46acb13a5a959ea69d0f15acc</guid>
<link>https://academictorrents.com/details/db8844f1170168a46acb13a5a959ea69d0f15acc</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>3505790162</size>
</item><item>
<title>1_1_5_1</title>
<category>Dataset</category>
<infohash>75a1862c308ea35b1be2a274abe9b78b5e80a100</infohash>
<guid>https://academictorrents.com/details/75a1862c308ea35b1be2a274abe9b78b5e80a100</guid>
<link>https://academictorrents.com/details/75a1862c308ea35b1be2a274abe9b78b5e80a100</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>15752826271</size>
</item><item>
<title>1_1_4_3</title>
<category>Dataset</category>
<infohash>e5f5c1e417ec17415a4601993d8a6d09c3b8e652</infohash>
<guid>https://academictorrents.com/details/e5f5c1e417ec17415a4601993d8a6d09c3b8e652</guid>
<link>https://academictorrents.com/details/e5f5c1e417ec17415a4601993d8a6d09c3b8e652</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>17206229696</size>
</item><item>
<title>1_1_4_2</title>
<category>Dataset</category>
<infohash>81429494652dd7b5d63da02f15f493142ce89c4a</infohash>
<guid>https://academictorrents.com/details/81429494652dd7b5d63da02f15f493142ce89c4a</guid>
<link>https://academictorrents.com/details/81429494652dd7b5d63da02f15f493142ce89c4a</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>15192093501</size>
</item><item>
<title>1_1_4_1</title>
<category>Dataset</category>
<infohash>ea902fdc17d143067b69db621c605809dd01e48b</infohash>
<guid>https://academictorrents.com/details/ea902fdc17d143067b69db621c605809dd01e48b</guid>
<link>https://academictorrents.com/details/ea902fdc17d143067b69db621c605809dd01e48b</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>17943911017</size>
</item><item>
<title>1_1_3_1</title>
<category>Dataset</category>
<infohash>ba09df0894fc20eeb51d3c113c30a6a0dc850a81</infohash>
<guid>https://academictorrents.com/details/ba09df0894fc20eeb51d3c113c30a6a0dc850a81</guid>
<link>https://academictorrents.com/details/ba09df0894fc20eeb51d3c113c30a6a0dc850a81</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>18278735691</size>
</item><item>
<title>1_1_2_6</title>
<category>Dataset</category>
<infohash>f873043dc3b2275a3e843a952d9ed5a2bf053d4e</infohash>
<guid>https://academictorrents.com/details/f873043dc3b2275a3e843a952d9ed5a2bf053d4e</guid>
<link>https://academictorrents.com/details/f873043dc3b2275a3e843a952d9ed5a2bf053d4e</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>21782662319</size>
</item><item>
<title>1_1_2_5</title>
<category>Dataset</category>
<infohash>95f671a91ed18c2d133ff6ec02ce62d3819d82ec</infohash>
<guid>https://academictorrents.com/details/95f671a91ed18c2d133ff6ec02ce62d3819d82ec</guid>
<link>https://academictorrents.com/details/95f671a91ed18c2d133ff6ec02ce62d3819d82ec</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>14769428277</size>
</item><item>
<title>1_1_2_4</title>
<category>Dataset</category>
<infohash>96da37ed62569ec33472f55acfdd7ea69a041d0b</infohash>
<guid>https://academictorrents.com/details/96da37ed62569ec33472f55acfdd7ea69a041d0b</guid>
<link>https://academictorrents.com/details/96da37ed62569ec33472f55acfdd7ea69a041d0b</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>11812188949</size>
</item><item>
<title>1_1_2_3</title>
<category>Dataset</category>
<infohash>17ecdb6fe2c0a9a262fdc6f35d4d5d9c82685b23</infohash>
<guid>https://academictorrents.com/details/17ecdb6fe2c0a9a262fdc6f35d4d5d9c82685b23</guid>
<link>https://academictorrents.com/details/17ecdb6fe2c0a9a262fdc6f35d4d5d9c82685b23</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>13277959359</size>
</item><item>
<title>1_1_2_2</title>
<category>Dataset</category>
<infohash>4c242162f4c3c8080bc56812aece150002c97487</infohash>
<guid>https://academictorrents.com/details/4c242162f4c3c8080bc56812aece150002c97487</guid>
<link>https://academictorrents.com/details/4c242162f4c3c8080bc56812aece150002c97487</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>7809063074</size>
</item><item>
<title>1_1_2_1</title>
<category>Dataset</category>
<infohash>24b4f39dda271d47550a9d65a351fba663f7171e</infohash>
<guid>https://academictorrents.com/details/24b4f39dda271d47550a9d65a351fba663f7171e</guid>
<link>https://academictorrents.com/details/24b4f39dda271d47550a9d65a351fba663f7171e</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>12161966045</size>
</item><item>
<title>1_1_1_2</title>
<category>Dataset</category>
<infohash>6e45cdfc5757744c1d69f43723d73eb4aa578a56</infohash>
<guid>https://academictorrents.com/details/6e45cdfc5757744c1d69f43723d73eb4aa578a56</guid>
<link>https://academictorrents.com/details/6e45cdfc5757744c1d69f43723d73eb4aa578a56</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>17902005169</size>
</item><item>
<title>1_1_1_1</title>
<category>Dataset</category>
<infohash>5be1feccbf9a9706cd28a691ff2701ad35b55075</infohash>
<guid>https://academictorrents.com/details/5be1feccbf9a9706cd28a691ff2701ad35b55075</guid>
<link>https://academictorrents.com/details/5be1feccbf9a9706cd28a691ff2701ad35b55075</link>
<description>Navigation and localisation dataset for self-driving cars and autonomous robots.</description>
<size>9987887390</size>
</item><item>
<title>dHCP 2nd data release -- all torrents</title>
<category>Dataset</category>
<infohash>5e70f4f907aa02a4a6308b85cff9762f2e2a036b</infohash>
<guid>https://academictorrents.com/details/5e70f4f907aa02a4a6308b85cff9762f2e2a036b</guid>
<link>https://academictorrents.com/details/5e70f4f907aa02a4a6308b85cff9762f2e2a036b</link>
<description/>
<size>8085121</size>
</item><item>
<title>dHCP 2nd data release -- dMRI pipeline</title>
<category>Dataset</category>
<infohash>ce11d11d267500308ec39e2f747edd9721efe773</infohash>
<guid>https://academictorrents.com/details/ce11d11d267500308ec39e2f747edd9721efe773</guid>
<link>https://academictorrents.com/details/ce11d11d267500308ec39e2f747edd9721efe773</link>
<description/>
<size>192116257808</size>
</item><item>
<title>dHCP 2nd data release -- fMRI pipeline</title>
<category>Dataset</category>
<infohash>7197ed06604fcc7791d321afc229efe7c24dc472</infohash>
<guid>https://academictorrents.com/details/7197ed06604fcc7791d321afc229efe7c24dc472</guid>
<link>https://academictorrents.com/details/7197ed06604fcc7791d321afc229efe7c24dc472</link>
<description/>
<size>497834063301</size>
</item><item>
<title>dHCP 2nd data release -- anatomical pipeline</title>
<category>Dataset</category>
<infohash>eaa25083f5ef8b56ec203b0ba38c42842adaa47d</infohash>
<guid>https://academictorrents.com/details/eaa25083f5ef8b56ec203b0ba38c42842adaa47d</guid>
<link>https://academictorrents.com/details/eaa25083f5ef8b56ec203b0ba38c42842adaa47d</link>
<description/>
<size>585883372778</size>
</item><item>
<title>Computer Vision &amp; Pattern Recognition (CVPR) 2019</title>
<category>Paper</category>
<infohash>c2400749c8d07ee9e7c3d72de1c8262ae251239e</infohash>
<guid>https://academictorrents.com/details/c2400749c8d07ee9e7c3d72de1c8262ae251239e</guid>
<link>https://academictorrents.com/details/c2400749c8d07ee9e7c3d72de1c8262ae251239e</link>
<description>Complete Computer Vision &amp; Pattern Recognition (CVPR) 2019 Conference Papers, from open access site http://openaccess.thecvf.com/CVPR2019.py</description>
<size>6210369447</size>
</item><item>
<title>dHCP 2nd data release -- sourcedata</title>
<category>Dataset</category>
<infohash>515e2989eedc853a8e256424de112f6f48f10d80</infohash>
<guid>https://academictorrents.com/details/515e2989eedc853a8e256424de112f6f48f10d80</guid>
<link>https://academictorrents.com/details/515e2989eedc853a8e256424de112f6f48f10d80</link>
<description/>
<size>737744253175</size>
</item><item>
<title>Replicated GPT-2 1.5B Parameter Model</title>
<category>Dataset</category>
<infohash>af468cfbb0284a35e706f5ae9b5dbcb45684f9d2</infohash>
<guid>https://academictorrents.com/details/af468cfbb0284a35e706f5ae9b5dbcb45684f9d2</guid>
<link>https://academictorrents.com/details/af468cfbb0284a35e706f5ae9b5dbcb45684f9d2</link>
<description/>
<size>5787581694</size>
</item><item>
<title>1000 Fundus images with 39 categories</title>
<category>Dataset</category>
<infohash>6d239d7d6c23f8b2a8046cca7078a7e10c6889d0</infohash>
<guid>https://academictorrents.com/details/6d239d7d6c23f8b2a8046cca7078a7e10c6889d0</guid>
<link>https://academictorrents.com/details/6d239d7d6c23f8b2a8046cca7078a7e10c6889d0</link>
<description>These 1,000 fundus images, which belong to 39 classes, come from the Joint Shantou International Eye Centre (JSIEC), Shantou city, Guangdong province, China. They are a small part of a total of 209,494 fundus images used for training, validating, and testing our deep learning platform. The copyright of these images belongs to JSIEC; they can be freely used for any purpose. https://i.imgur.com/kWGUlMo.jpg     3.1M	1000images/12.Disc swelling and elevation 2.6M	1000images/26.Fibrosis 23M	1000images/0.0.Normal 2.4M	1000images/15.1.Bietti crystalline dystrophy 3.8M	1000images/22.Cotton-wool spots 9.8M	1000images/8.MH 4.5M	1000images/24.Chorioretinal atrophy-coloboma 3.4M	1000images/25.Preretinal hemorrhage 2.9M	1000images/14.Congenital disc abnormality 4.7M	1000images/28.Silicon oil in eye 19M	1000images/1.1.DR3 3.5M	1000images/16.Peripheral retinal degeneration and break 4.4M	1000images/10.0.Possible glaucoma 28M	1000images/1.0.DR2 19M	1000images/0.2.Large optic cup 8.9M	1000images/7.ERM 22M	1000images/9.Pathological myopia 2.8M	1000images/20.Massive hard exudates 74M	1000images/29.0.Blur fundus without PDR 12M	1000images/15.0.Retinitis pigmentosa 2.8M	1000images/13.Dragged Disc 5.6M	1000images/5.1.VKH disease 27M	1000images/4.Rhegmatogenous RD 33M	1000images/6.Maculopathy 12M	1000images/0.3.DR1 12M	1000images/21.Yellow-white spots-flecks 19M	1000images/2.0.BRVO 1.6M	1000images/19.Fundus neoplasm 13M	1000images/29.1.Blur fundus with suspected PDR 4.6M	1000images/2.1.CRVO 5.0M	1000images/23.Vessel tortuosity 5.5M	1000images/10.1.Optic atrophy 6.2M	1000images/5.0.CSCR 4.3M	1000images/11.Severe hypertensive retinopathy 2.8M	1000images/17.Myelinated nerve fiber 6.1M	1000images/0.1.Tessellated fundus 5.3M	1000images/27.Laser Spots 3.7M	1000images/18.Vitreous particles 3.6M	1000images/3.RAO 429M	1000images</description>
<size>402759715</size>
</item><item>
<title>MRI Dataset for Hippocampus Segmentation (HFH) (hippseg_2011)</title>
<category>Dataset</category>
<infohash>d019f4f082f3fda94f0f74577b50dc30beee7bf8</infohash>
<guid>https://academictorrents.com/details/d019f4f082f3fda94f0f74577b50dc30beee7bf8</guid>
<link>https://academictorrents.com/details/d019f4f082f3fda94f0f74577b50dc30beee7bf8</link>
<description>This dataset contains T1-weighted MR images of 50 subjects, 40 of whom are patients with temporal lobe epilepsy and 10 are nonepileptic subjects. Hippocampus labels are provided for 25 subjects for training. The users may submit their segmentation outcomes for the remaining 25 testing images to get a table of segmentation metrics. https://i.imgur.com/XSJr6oQ.png https://i.imgur.com/jWpnVeu.gif     HFH ├── ReadMe.txt ├── Test │   ├── HFH_026.hdr │   ├── HFH_026.img │   ├── HFH_027.hdr │   ├── HFH_027.img │   ├── HFH_028.hdr │   ├── HFH_028.img │   ├── HFH_029.hdr │   ├── HFH_029.img │   ├── HFH_030.hdr │   ├── HFH_030.img │   ├── HFH_031.hdr │   ├── HFH_031.img │   ├── HFH_032.hdr │   ├── HFH_032.img │   ├── HFH_033.hdr │   ├── HFH_033.img │   ├── HFH_034.hdr │   ├── HFH_034.img │   ├── HFH_035.hdr │   ├── HFH_035.img │   ├── HFH_036.hdr │   ├── HFH_036.img │   ├── HFH_037.hdr │   ├── HFH_037.img │   ├── HFH_038.hdr │   ├── HFH_038.img │   ├── HFH_039.hdr │   ├── HFH_039.img │   ├── HFH_040.hdr │   ├── HFH_040.img │   ├── HFH_041.hdr │   ├── HFH_041.img │   ├── HFH_042.hdr │   ├── HFH_042.img │   ├── HFH_043.hdr │   ├── HFH_043.img │   ├── HFH_044.hdr │   ├── HFH_044.img │   ├── HFH_045.hdr │   ├── HFH_045.img │   ├── HFH_046.hdr │   ├── HFH_046.img │   ├── HFH_047.hdr │   ├── HFH_047.img │   ├── HFH_048.hdr │   ├── HFH_048.img │   ├── HFH_049.hdr │   ├── HFH_049.img │   ├── HFH_050.hdr │   └── HFH_050.img └── Train ├── HFH_001.hdr ├── HFH_001.img ├── HFH_002.hdr ├── HFH_002.img ├── HFH_003.hdr ├── HFH_003.img ├── HFH_004.hdr ├── HFH_004.img ├── HFH_005.hdr ├── HFH_005.img ├── HFH_006.hdr ├── HFH_006.img ├── HFH_007.hdr ├── HFH_007.img ├── HFH_008.hdr ├── HFH_008.img ├── HFH_009.hdr ├── HFH_009.img ├── HFH_010.hdr ├── HFH_010.img ├── HFH_011.hdr ├── HFH_011.img ├── HFH_012.hdr ├── HFH_012.img ├── HFH_013.hdr ├── HFH_013.img ├── HFH_014.hdr ├── HFH_014.img ├── HFH_015.hdr ├── HFH_015.img ├── HFH_016.hdr ├── HFH_016.img ├── HFH_017.hdr ├── HFH_017.img ├── 
HFH_018.hdr ├── HFH_018.img ├── HFH_019.hdr ├── HFH_019.img ├── HFH_020.hdr ├── HFH_020.img ├── HFH_021.hdr ├── HFH_021.img ├── HFH_022.hdr ├── HFH_022.img ├── HFH_023.hdr ├── HFH_023.img ├── HFH_024.hdr ├── HFH_024.img ├── HFH_025.hdr ├── HFH_025.img └── Labels ├── HFH_001_Hipp_Labels.hdr ├── HFH_001_Hipp_Labels.img ├── HFH_002_Hipp_Labels.hdr ├── HFH_002_Hipp_Labels.img ├── HFH_003_Hipp_Labels.hdr ├── HFH_003_Hipp_Labels.img ├── HFH_004_Hipp_Labels.hdr ├── HFH_004_Hipp_Labels.img ├── HFH_005_Hipp_Labels.hdr ├── HFH_005_Hipp_Labels.img ├── HFH_006_Hipp_Labels.hdr ├── HFH_006_Hipp_Labels.img ├── HFH_007_Hipp_Labels.hdr ├── HFH_007_Hipp_Labels.img ├── HFH_008_Hipp_Labels.hdr ├── HFH_008_Hipp_Labels.img ├── HFH_009_Hipp_Labels.hdr ├── HFH_009_Hipp_Labels.img ├── HFH_010_Hipp_Labels.hdr ├── HFH_010_Hipp_Labels.img ├── HFH_011_Hipp_Labels.hdr ├── HFH_011_Hipp_Labels.img ├── HFH_012_Hipp_Labels.hdr ├── HFH_012_Hipp_Labels.img ├── HFH_013_Hipp_Labels.hdr ├── HFH_013_Hipp_Labels.img ├── HFH_014_Hipp_Labels.hdr ├── HFH_014_Hipp_Labels.img ├── HFH_015_Hipp_Labels.hdr ├── HFH_015_Hipp_Labels.img ├── HFH_016_Hipp_Labels.hdr ├── HFH_016_Hipp_Labels.img ├── HFH_017_Hipp_Labels.hdr ├── HFH_017_Hipp_Labels.img ├── HFH_018_Hipp_Labels.hdr ├── HFH_018_Hipp_Labels.img ├── HFH_019_Hipp_Labels.hdr ├── HFH_019_Hipp_Labels.img ├── HFH_020_Hipp_Labels.hdr ├── HFH_020_Hipp_Labels.img ├── HFH_021_Hipp_Labels.hdr ├── HFH_021_Hipp_Labels.img ├── HFH_022_Hipp_Labels.hdr ├── HFH_022_Hipp_Labels.img ├── HFH_023_Hipp_Labels.hdr ├── HFH_023_Hipp_Labels.img ├── HFH_024_Hipp_Labels.hdr ├── HFH_024_Hipp_Labels.img ├── HFH_025_Hipp_Labels.hdr └── HFH_025_Hipp_Labels.img 3 directories, 151 files</description>
<size>598881636</size>
</item><item>
<title>The Moral Psychology Handbook</title>
<category>Paper</category>
<infohash>90493c18f577d24d5646c5075193bf57faabdcf6</infohash>
<guid>https://academictorrents.com/details/90493c18f577d24d5646c5075193bf57faabdcf6</guid>
<link>https://academictorrents.com/details/90493c18f577d24d5646c5075193bf57faabdcf6</link>
<description>Torrent of academic paper https://www.tandfonline.com/doi/abs/10.1080/09515089.2012.729488</description>
<size>150932</size>
</item><item>
<title>Cognition in context : new approaches to new Islamist movements in the Middle East</title>
<category>Paper</category>
<infohash>60ed1ceb461daec1c31ce69e0ce216c416481372</infohash>
<guid>https://academictorrents.com/details/60ed1ceb461daec1c31ce69e0ce216c416481372</guid>
<link>https://academictorrents.com/details/60ed1ceb461daec1c31ce69e0ce216c416481372</link>
<description>In the past two decades cognitive anthropology has offered a radically new framework for the study of social movements and complex ideologies. Besides creating a scientific foundation for the study of religion and culture, its empirical basis offers a less biased approach to controversial subjects such as new religious movements and religious violence that traditional anthropological approaches have struggled to maintain. This paper argues that new religious movements can be analysed using the tools of cognitive science, specifically new Islamist movements in the Middle East affiliated with Al-Qaeda. Such an approach yields an objective lens to analyse the claims that their ideologies make them violent. By presenting a brief analysis of movements inspired from the Sunni tradition in the 20th century this paper intends to show that the causal factors of religious violence are largely the product of the dynamic mental mechanisms interacting with a physical and social environment.</description>
<size>202645</size>
</item><item>
<title>Rethinking Complexity and Culture: Cognitive Science as Explanatory Framework for Cultural Phenomena</title>
<category>Paper</category>
<infohash>808bab93cf2579e8235a64fac75c38924b0cac9f</infohash>
<guid>https://academictorrents.com/details/808bab93cf2579e8235a64fac75c38924b0cac9f</guid>
<link>https://academictorrents.com/details/808bab93cf2579e8235a64fac75c38924b0cac9f</link>
<description>Torrent of academic paper https://brill.com/view/journals/jocc/15/5/article-p435_1.xml?lang=en</description>
<size>482002</size>
</item><item>
<title>Potential causes of ritual instability in doctrinal new religious movements : a cognitive hypothesis</title>
<category>Paper</category>
<infohash>db70d09d220f72e468f98dc7ebce9e15f85009ce</infohash>
<guid>https://academictorrents.com/details/db70d09d220f72e468f98dc7ebce9e15f85009ce</guid>
<link>https://academictorrents.com/details/db70d09d220f72e468f98dc7ebce9e15f85009ce</link>
<description>Within the animal kingdom, hierarchical social structures appear in very similar forms, even if the organisms that make up the social structure differ drastically. Hierarchical social structures and apparent power centralization patterns can be witnessed in insects such as ants and bees, avian species such as chickens and vultures, and mammals such as wolves and humans. Here, an attempt will be made to apply conceptions and terminology of evolutionary theory concerning alpha male charismatic leaders in new religious movements (NRMs), together with cognitive psychology, in an interdisciplinary explanation for ritual instability while testing established ritual hypotheses. This will be done by hypothesizing how charismatic alphas attain their status within religious groups and how this presence affects the ritual stability of the group at a cognitive level. Torrent of academic paper https://digilib.phil.muni.cz/handle/11222.digilib/118517</description>
<size>405924</size>
</item><item>
<title>Semantic network mapping of religious material: testing multi-agent computer models of social theories against real-world data</title>
<category>Paper</category>
<infohash>910264e978bff4d8e0e0051600dfb8c2e028523c</infohash>
<guid>https://academictorrents.com/details/910264e978bff4d8e0e0051600dfb8c2e028523c</guid>
<link>https://academictorrents.com/details/910264e978bff4d8e0e0051600dfb8c2e028523c</link>
<description>Agent-based modeling allows researchers to investigate theories of complex social phenomena and subsequently use the model to generate new hypotheses that can then be compared to real-world data. However, computer modeling has been underutilized in regard to the understanding of religious systems, which often require very complex theories with multiple interacting variables (Braxton et al. in Method Theory Study Relig 24(3):267–290, 2012. doi: 10.1163/157006812X635709; Lane in J Cogn Sci Relig 1(2):161–180, 2013). This paper presents an example of how computer modeling can be used to explore, test, and further understand religious systems, specifically looking at one prominent theory of religious ritual. The process is continuous: theory building, hypothesis generation, testing against real-world data, and improving the model. In this example, the output of an agent-based model of religious behavior is compared against real-world religious sermons and texts using semantic network analysis. It finds that most religious materials exhibit unique scale-free small-world properties and that a concept’s centrality in a religious schema best predicts its frequency of presentation. These results reveal that adjustments need to be made to existing models of religious ritual systems and provide parameters for future models. The paper ends with a discussion of implications for a new multi-agent model of doctrinal ritual behaviors as well as propositions for further interdisciplinary research concerning the multi-agent modeling of religious ritual behaviors. Torrent of academic paper https://link.springer.com/article/10.1007/s10339-015-0649-1</description>
<size>338887</size>
</item><item>
<title>Strengthening the supernatural punishment hypothesis through computer modeling</title>
<category>Paper</category>
<infohash>d94726a32c9a274d2ed07a7bd093a44631849ed8</infohash>
<guid>https://academictorrents.com/details/d94726a32c9a274d2ed07a7bd093a44631849ed8</guid>
<link>https://academictorrents.com/details/d94726a32c9a274d2ed07a7bd093a44631849ed8</link>
<description>Torrent of academic paper, computer code, and data for https://www.tandfonline.com/doi/abs/10.1080/2153599X.2017.1302977</description>
<size>30528949</size>
</item><item>
<title>Method, Theory, and Multi-Agent Artificial Intelligence: Creating computer models of complex social interaction</title>
<category>Paper</category>
<infohash>927f56a60ac77383caa4c189a82362be69a09521</infohash>
<guid>https://academictorrents.com/details/927f56a60ac77383caa4c189a82362be69a09521</guid>
<link>https://academictorrents.com/details/927f56a60ac77383caa4c189a82362be69a09521</link>
<description>The construction of computer models is becoming an increasingly useful and popular way of testing theories in the cognitive sciences. This paper will present a brief overview of the methods available for constructing and testing computer models of social phenomena such as religious beliefs and behaviors. It will focus on the importance of theoretical continuity and data replication in computer modelling while negotiating the relationship between specificity and ecological validity when models are extended into novel contexts. This paper will argue that computer modeling is an important supplement to the methodological toolbox of cognitive scientists interested in human social phenomena. However, this is only the case if developers pay close attention to research methods and theories and if the method of a model’s development is appropriate for the target phenomenon (Sun, 2006). It concludes that multi-agent AI models are the most appropriate computational tool for the study of complex social phenomena. Torrent of academic paper https://journals.equinoxpub.com/index.php/JCSR/article/view/18235</description>
<size>353504</size>
</item><item>
<title>Can we predict religious extremism?</title>
<category>Paper</category>
<infohash>a60de346e39e35f6b5c435713e0053a239453dd1</infohash>
<guid>https://academictorrents.com/details/a60de346e39e35f6b5c435713e0053a239453dd1</guid>
<link>https://academictorrents.com/details/a60de346e39e35f6b5c435713e0053a239453dd1</link>
<description>Given events such as 11 September, the 2013 Boston Bombing, and the 2015 Paris attacks, it is becoming increasingly apparent that religious extremism has great potential to negatively impact our daily lives. Predicting religious extremism could – in principle – allow us to respond to, mediate, or eliminate threats more efficiently. It is argued here that predicting religious extremism is possible, but religious systems are complex dynamic systems and should be addressed as such. To address religious systems in a way that could provide useful predictions, one should use multi-agent artificial intelligence models that are validated using empirical studies of human cognition to define rules for the agents, and historical and contemporary data sources (e.g., “big-data” and historical databases) to calibrate and parameterize simulations. Ultimately, I conclude that near-term prediction is possible if one incorporates social and biological environments as well as inter- and intra-agent cognitive mechanisms, but long-term predictions would be unreliable. Key to this approach is the admission that cognitive mechanisms play crucial roles in the generation and transmission of culture, as well as the recognition that social and biological environments provide input to these mechanisms but neither social nor biological environmental input is sufficient by itself. Torrent of academic paper https://www.tandfonline.com/doi/abs/10.1080/2153599X.2016.1249923</description>
<size>842455</size>
</item><item>
<title>Big Data and Anthropology: Concerns for data collection in a new research context</title>
<category>Paper</category>
<infohash>4cecafb8ee0051824cc3f84931f2ed7784442179</infohash>
<guid>https://academictorrents.com/details/4cecafb8ee0051824cc3f84931f2ed7784442179</guid>
<link>https://academictorrents.com/details/4cecafb8ee0051824cc3f84931f2ed7784442179</link>
<description>Traditionally, anthropologists have worked within relatively small groups of individuals, at least relative to the scope of modern big-data analytics. Traditionally, we have known our informants and participants and likely have had some personal relationship or connection with them at some level. Such research has carried with it a tradition of protection whereby anthropologists are keenly aware that we are often working in fragile parts of human societies and asking personal questions; therefore we have strived to protect the identities of our informants. However, the modern digital environment is one whereby we have access to individuals’ data, sometimes deeply personal data, at the touch of a button. Given this massive amount of unique individual data, one can reverse-engineer the data in order to obtain the specific identity of the person, even if their name is changed or erased from that data. In addition, it is often the case that, when a researcher obtains social network data—even when assuming complete consent and legal transfer of the information—information concerning real individuals who have not consented to participate in the research is also transmitted. Generally, this paper argues that we have not given enough thought to such problems as online data becomes of increasing interest to anthropology. The presentation concludes that anthropologists must keep in mind a combination of “traditional” research values as well as the fact that we are in a new frontier of information as we enter the world of “big-data”. It finishes with some suggestions for participant protection. Torrent of academic paper https://anthro.web.ox.ac.uk/sites/default/files/anthro/documents/media/jaso8_1_2016_74_88.pdf</description>
<size>284742</size>
</item><item>
<title>Longitudinal diabetic retinopathy screening data</title>
<category>Dataset</category>
<infohash>744717095e59373186abec814c86de4831d889e9</infohash>
<guid>https://academictorrents.com/details/744717095e59373186abec814c86de4831d889e9</guid>
<link>https://academictorrents.com/details/744717095e59373186abec814c86de4831d889e9</link>
<description>This data set contains repeated 4-field color fundus photos (1120 in total) of 70 patients in the diabetic retinopathy screening program of the Rotterdam Eye Hospital (Rotterdam, The Netherlands); the result of intra- and inter-visit registration by two methods (i2kRetina and WeVaR); and the grading and ranking of these results by two graders. ### Inclusion criteria: All patients with diabetes who were screened for diabetic retinopathy during one week in June 2013. ### Exclusion criteria: First-time patients and patients who were not examined in the year before. Fundus images of both eyes of each patient were acquired using a non-mydriatic digital fundus camera (Topcon TRC-NW65) with a 45-degree field of view after pupil dilation. Images of each visit were registered by two methods: WeVaR (see Adal et al, A Hierarchical Coarse-to-Fine Approach for Image Registration) and i2k Retina (DualAlign LLC). Image mosaic movies were created from the registered, normalized fundus images and were presented to two graders. In addition, the graders compared the image mosaics side-by-side and ranked them. Images of consecutive visits were similarly registered and the image mosaic movies were again presented to two graders. Included data: Patient’s gender and age All color fundus images All normalized fundus images Intra-visit image mosaic movies for both registration methods Intra-visit image mosaics for both registration methods Inter-visit image mosaic movies for both registration methods Scores of two graders (not every movie/image was graded by both graders) ### Publications This data set, or a part thereof, was used by us in the following paper(s). Please cite one or more of these papers in any of your publication(s) that uses (parts of) this data set. K.M. Adal, P.G. van Etten, J.P. Martinez, L.J. van Vliet, K.A. Vermeer. Accuracy Assessment of Intra and Inter-Visit Fundus Image Registration for Diabetic Retinopathy Screening. Invest Ophthalmol Vis Sci. 2015. Accepted for publication. ### Contributors The following people did all the hard work on assembling and releasing this data set: Kedir Adal, Peter van Etten, Jose Martinez and Koen Vermeer https://i.imgur.com/suusenq.png</description>
<size>4857449318</size>
</item><item>
<title>Jr, Lane - 2012 - Ancestors in the simulation machine measuring the transmission and oscillation of religiosity in computer modeling.pdf</title>
<category>Paper</category>
<infohash>d6d46790bf7525c7beedee763256e3424fb37590</infohash>
<guid>https://academictorrents.com/details/d6d46790bf7525c7beedee763256e3424fb37590</guid>
<link>https://academictorrents.com/details/d6d46790bf7525c7beedee763256e3424fb37590</link>
<description>Torrent of academic paper https://www.tandfonline.com/doi/abs/10.1080/2153599X.2012.703454</description>
<size>946503</size>
</item><item>
<title>RIGA dataset (Retinal fundus images for glaucoma analysis)</title>
<category>Dataset</category>
<infohash>eb9dd9216a1c9a622250ad70a400204e7531196d</infohash>
<guid>https://academictorrents.com/details/eb9dd9216a1c9a622250ad70a400204e7531196d</guid>
<link>https://academictorrents.com/details/eb9dd9216a1c9a622250ad70a400204e7531196d</link>
<description>A de-identified dataset of retinal fundus images for glaucoma analysis (RIGA) was derived from three sources. The optic cup and disc boundaries of these images were marked and annotated manually by six experienced ophthalmologists, each working individually using a tablet and a precise pen. Six parameters were extracted and assessed among the ophthalmologists. The inter-observer annotations were compared by calculating the standard deviation (SD) for every image across the six ophthalmologists, in order to determine whether any of the six annotations were outliers to be eliminated, i.e. filtering the images. The dataset includes 3 different files: 1) The MESSIDOR dataset file contains 460 original images plus 460 images with each ophthalmologist's manual marking, for a total of 3220 images. 2) The Bin Rushed Ophthalmic Center file contains 195 original images plus 195 images with each ophthalmologist's manual marking. Ahmed Almazroa, Sami Alodhayb, Essameldin Osman, Eslam Ramadan, Mohammed Hummadi, Mohammed Dlaim, Muhannad Alkatee, Kaamran Raahemifar, Vasudevan Lakshminarayanan, "Retinal fundus images for glaucoma analysis: the RIGA dataset", Proc. SPIE 10579, Medical Imaging 2018: Imaging Informatics for Healthcare, Research, and Applications, 105790B (6 March 2018); doi: 10.1117/12.2293584; https://doi.org/10.1117/12.2293584 https://i.imgur.com/5y3h0Vr.png</description>
<size>13840106500</size>
</item><item>
<title>Minecraft Skins</title>
<category>Dataset</category>
<infohash>14cf27fca7f26714d2a5193dc95348a4712cdcdf</infohash>
<guid>https://academictorrents.com/details/14cf27fca7f26714d2a5193dc95348a4712cdcdf</guid>
<link>https://academictorrents.com/details/14cf27fca7f26714d2a5193dc95348a4712cdcdf</link>
<description>An image data set containing 900,000+ images of unique Minecraft skins of real players. It could be used for training a GAN or for other image-related applications.</description>
<size>2471638528</size>
</item><item>
<title>Twitch Emotes Images Dataset</title>
<category>Dataset</category>
<infohash>168649d9e29662e033d8db9c7bf0077c793d36c8</infohash>
<guid>https://academictorrents.com/details/168649d9e29662e033d8db9c7bf0077c793d36c8</guid>
<link>https://academictorrents.com/details/168649d9e29662e033d8db9c7bf0077c793d36c8</link>
<description>This is a dataset containing over 1,200,000 images of real Twitch emotes. Most emotes (99.99%) are 28 by 28 pixels. It could be used to train a GAN or for other applications. Examples: https://i.imgur.com/CJKTaWM.png</description>
<size>4286330880</size>
</item><item>
<title>P. vivax (malaria) infected human blood smears (BBBC041)</title>
<category>Dataset</category>
<infohash>2fed90eeaa0fbf98aba474c5d7e56f6290121507</infohash>
<guid>https://academictorrents.com/details/2fed90eeaa0fbf98aba474c5d7e56f6290121507</guid>
<link>https://academictorrents.com/details/2fed90eeaa0fbf98aba474c5d7e56f6290121507</link>
<description>### Description of the biological application Malaria is a disease caused by Plasmodium parasites that remains a major threat in global health, affecting 200 million people and causing 400,000 deaths a year. The main species of malaria that affect humans are Plasmodium falciparum and Plasmodium vivax. For malaria as well as other microbial infections, manual inspection of thick and thin blood smears by trained microscopists remains the gold standard for parasite detection and stage determination because of its low reagent and instrument cost and high flexibility. Despite manual inspection being extremely low throughput and susceptible to human bias, automatic counting software remains largely unused because of the wide range of variations in brightfield microscopy images. However, a robust automatic counting and cell classification solution would provide enormous benefits due to faster and more accurate quantitative results without human variability; researchers and medical professionals could better characterize stage-specific drug targets and better quantify patient reactions to drugs. Previous attempts to automate the process of identifying and quantifying malaria have not gained major traction partly due to difficulty of replication, comparison, and extension. Authors also rarely make their image sets available, which precludes replication of results and assessment of potential improvements. The lack of both a standard set of images and a standard set of metrics for reporting results has impeded the field. ### Images Images are in .png or .jpg format. There are 3 sets of images consisting of 1364 images (~80,000 cells) with different researchers having prepared each one: from Brazil (Stefanie Lopes), from Southeast Asia (Benoit Malleret), and time course (Gabriel Rangel). Blood smears were stained with Giemsa reagent. 
### Ground truth The data consists of two classes of uninfected cells (RBCs and leukocytes) and four classes of infected cells (gametocytes, rings, trophozoites, and schizonts). Annotators were permitted to mark some cells as difficult if not clearly in one of the cell classes. The data were heavily imbalanced towards uninfected RBCs, which made up over 95% of all cells, far outnumbering uninfected leukocytes and infected cells. A class label and set of bounding box coordinates were given for each cell. For all data sets, infected cells were given a class label by Stefanie Lopes, malaria researcher at the Dr. Heitor Vieira Dourado Tropical Medicine Foundation hospital, indicating stage of development or marked as difficult. ### For more information These images were contributed by Jane Hung of MIT and the Broad Institute in Cambridge, MA. https://i.imgur.com/1zrfx2Y.png</description>
<size>2259224287</size>
</item><item>
<title>Images of thin blood smears with bounding boxes around malaria parasites (malaria-655)</title>
<category>Dataset</category>
<infohash>baa7ef7e09a123c04c516d7226193423f4f2e5b3</infohash>
<guid>https://academictorrents.com/details/baa7ef7e09a123c04c516d7226193423f4f2e5b3</guid>
<link>https://academictorrents.com/details/baa7ef7e09a123c04c516d7226193423f4f2e5b3</link>
<description>655 images of thin blood smears with bounding boxes around parasites. Tek FB, Dempster AG, Kale I, Parasite detection and identification for automated thin blood film malaria diagnosis. Computer Vision and Image Understanding 2010, 114:21-32. https://i.imgur.com/E30zLVQ.png</description>
<size>105447054</size>
</item><item>
<title>ISIC2017: Skin Lesion Analysis Towards Melanoma Detection</title>
<category>Dataset</category>
<infohash>152479c5e0b31c05c8fafbc23fcd5a20bf7f910b</infohash>
<guid>https://academictorrents.com/details/152479c5e0b31c05c8fafbc23fcd5a20bf7f910b</guid>
<link>https://academictorrents.com/details/152479c5e0b31c05c8fafbc23fcd5a20bf7f910b</link>
<description>The goal of the challenge is to help participants develop image analysis tools to enable the automated diagnosis of melanoma from dermoscopic images. Image analysis of skin lesions is composed of 3 parts: - Part 1: Lesion Segmentation - Part 2: Detection and Localization of Visual Dermoscopic Features/Patterns - Part 3: Disease Classification This challenge provides training data (~2000 images) for participants to engage in all 3 components of lesion image analysis. A separate public validation dataset (~150 images) and blind held-out test dataset (~600 images) will be provided for participants to generate and submit automated results. ## Background ### Melanoma Skin cancer is a major public health problem, with over 5 million newly diagnosed cases in the United States each year. Melanoma is the deadliest form of skin cancer, responsible for over 9,000 deaths each year. ### Dermoscopy As a pigmented lesion occurring on the surface of the skin, melanoma is amenable to early detection by expert visual inspection. It is also amenable to automated detection with image analysis. Given the widespread availability of high-resolution cameras, algorithms that can improve our ability to screen and detect troublesome lesions can be of great value. As a result, many centers have begun their own research efforts on automated analysis. However, a centralized, coordinated, and comparative effort across institutions has yet to be implemented. Dermoscopy is an imaging technique that eliminates the surface reflection of skin. By removing surface reflection, visualization of deeper levels of skin is enhanced. Prior research has shown that when used by expert dermatologists, dermoscopy provides improved diagnostic accuracy, in comparison to standard photography. As inexpensive consumer dermatoscope attachments for smart phones are beginning to reach the market, the opportunity for automated dermoscopic assessment algorithms to positively influence patient care increases. 
https://i.imgur.com/daTTwFV.png ## Citation: Codella N, Gutman D, Celebi ME, Helba B, Marchetti MA, Dusza S, Kalloo A, Liopyris K, Mishra N, Kittler H, Halpern A. "Skin Lesion Analysis Toward Melanoma Detection: A Challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), Hosted by the International Skin Imaging Collaboration (ISIC)". arXiv: 1710.05006 [cs.CV] Available: https://arxiv.org/abs/1710.05006</description>
<size>13006801360</size>
</item><item>
<title>ISIC2018: Skin Lesion Analysis Towards Melanoma Detection</title>
<category>Dataset</category>
<infohash>1e3811b66f1129a2b86b7c291316db8583dbc94f</infohash>
<guid>https://academictorrents.com/details/1e3811b66f1129a2b86b7c291316db8583dbc94f</guid>
<link>https://academictorrents.com/details/1e3811b66f1129a2b86b7c291316db8583dbc94f</link>
<description>This challenge is broken into three separate tasks: - Task 1: Lesion Segmentation - Task 2: Lesion Attribute Detection - Task 3: Disease Classification https://i.imgur.com/daTTwFV.png When using the ISIC 2018 datasets in your research, please cite the following works: [1] Noel Codella, Veronica Rotemberg, Philipp Tschandl, M. Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, Harald Kittler, Allan Halpern: “Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC)”, 2018; https://arxiv.org/abs/1902.03368 [2] Tschandl, P., Rosendahl, C. &amp; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 5, 180161 doi:10.1038/sdata.2018.161 (2018).</description>
<size>17082696680</size>
</item><item>
<title>FMA: A Dataset For Music Analysis</title>
<category>Dataset</category>
<infohash>dba20c45d4d6fa6453a4e99d2f8a4817893cfb94</infohash>
<guid>https://academictorrents.com/details/dba20c45d4d6fa6453a4e99d2f8a4817893cfb94</guid>
<link>https://academictorrents.com/details/dba20c45d4d6fa6453a4e99d2f8a4817893cfb94</link>
<description>We introduce the Free Music Archive (FMA), an open and easily accessible dataset suitable for evaluating several tasks in MIR, a field concerned with browsing, searching, and organizing large music collections. The community's growing interest in feature and end-to-end learning is however restrained by the limited availability of large audio datasets. The FMA aims to overcome this hurdle by providing 917 GiB and 343 days of Creative Commons-licensed audio from 106,574 tracks from 16,341 artists and 14,854 albums, arranged in a hierarchical taxonomy of 161 genres. It provides full-length and high-quality audio, pre-computed features, together with track- and user-level metadata, tags, and free-form text such as biographies. We here describe the dataset and how it was created, propose a train/validation/test split and three subsets, discuss some suitable MIR tasks, and evaluate some baselines for genre recognition. Code, data, and usage examples are available at https://github.com/mdeff/fma.</description>
<size>1080518251044</size>
</item><item>
<title>APTOS 2019 diabetic retinopathy dataset</title>
<category>Dataset</category>
<infohash>d8653db45e7f111dc2c1b595bdac7ccf695efcfd</infohash>
<guid>https://academictorrents.com/details/d8653db45e7f111dc2c1b595bdac7ccf695efcfd</guid>
<link>https://academictorrents.com/details/d8653db45e7f111dc2c1b595bdac7ccf695efcfd</link>
<description>You are provided with a large set of retina images taken using fundus photography under a variety of imaging conditions. A clinician has rated each image for the severity of diabetic retinopathy on a scale of 0 to 4: 0 - No DR, 1 - Mild, 2 - Moderate, 3 - Severe, 4 - Proliferative DR. Train: 3662 images. Test: 1928 images. https://i.imgur.com/lbybV6q.png</description>
<size>10214776605</size>
</item><item>
<title>r/WritingPrompts, Text (2018)</title>
<category>Dataset</category>
<infohash>b4fa678ca4a330cf7078750b93eaefb1680a9053</infohash>
<guid>https://academictorrents.com/details/b4fa678ca4a330cf7078750b93eaefb1680a9053</guid>
<link>https://academictorrents.com/details/b4fa678ca4a330cf7078750b93eaefb1680a9053</link>
<description>r/WritingPrompts data, formatted for GPT-2 training.</description>
<size>87467308</size>
</item><item>
<title>ImageNet-ValidationSet.zip</title>
<category>Dataset</category>
<infohash>16c5dd6a172ac59e0f27d4b698e5399ea9d48160</infohash>
<guid>https://academictorrents.com/details/16c5dd6a172ac59e0f27d4b698e5399ea9d48160</guid>
<link>https://academictorrents.com/details/16c5dd6a172ac59e0f27d4b698e5399ea9d48160</link>
<description/>
<size>6672696406</size>
</item><item>
<title>Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset</title>
<category>Dataset</category>
<infohash>defa6184c98663c94de97cb7e0952a54677e4aac</infohash>
<guid>https://academictorrents.com/details/defa6184c98663c94de97cb7e0952a54677e4aac</guid>
<link>https://academictorrents.com/details/defa6184c98663c94de97cb7e0952a54677e4aac</link>
<description>MAESTRO (MIDI and Audio Edited for Synchronous TRacks and Organization) is a dataset composed of over 200 hours of virtuosic piano performances captured with fine alignment (~3 ms) between note labels and audio waveforms.</description>
<size>109945081707</size>
</item><item>
<title>2D ultrasound sequences of the liver (mp4)</title>
<category>Dataset</category>
<infohash>4d107e9fd4b00fa797504d6cd0131744c9f31e81</infohash>
<guid>https://academictorrents.com/details/4d107e9fd4b00fa797504d6cd0131744c9f31e81</guid>
<link>https://academictorrents.com/details/4d107e9fd4b00fa797504d6cd0131744c9f31e81</link>
<description>7 2D ultrasound sequences of the liver of healthy volunteers were acquired during free breathing over a period of 5-10 min. This is a converted version of the usliverseq dataset into mp4 files encoded with H.264 to reduce size and make the files easier to read. The conversion commands used are:</description>
<size>107394613</size>
</item><item>
<title>01QZP 2018-2019 Ambient Intelligence</title>
<category>Course</category>
<infohash>9bbe28468af204ccefe75662cd184ce0abed0ad4</infohash>
<guid>https://academictorrents.com/details/9bbe28468af204ccefe75662cd184ce0abed0ad4</guid>
<link>https://academictorrents.com/details/9bbe28468af204ccefe75662cd184ce0abed0ad4</link>
<description>Lectures of Ambient Intelligence at Politecnico di Torino, in 2019. Topics: * Introduction to Ambient Intelligence: definitions and available approaches for smart homes, smart buildings, etc. Overview of application areas (home, building, city, traffic, etc.) and types of applications (monitoring, comfort, anomaly detection, ambient assisted living, control and automation, etc.) * Requirements and design methodology for AmI. Design, analysis and specification of requirements and functionalities related to users interacting with AmI settings. * Practical programming of AmI systems: the Python language, the Raspberry Pi computer, Web protocols and languages (e.g., HTTP and REST), web-based APIs, and collaboration tools (git, GitHub).</description>
<size>4378944634</size>
</item><item>
<title>03FYZ 2018-2019 Tecniche di Programmazione (ITA)</title>
<category>Course</category>
<infohash>829147695b59dfa6a63676493525c350e0968ae8</infohash>
<guid>https://academictorrents.com/details/829147695b59dfa6a63676493525c350e0968ae8</guid>
<link>https://academictorrents.com/details/829147695b59dfa6a63676493525c350e0968ae8</link>
<description>Video lectures for the course Tecniche di Programmazione (Programming Techniques), held at Politecnico di Torino in the 2018/2019 academic year (in Italian). Course instructors: Fulvio Corno, Andrea Marcelli, Alberto Monge Roffarello. Course information: - official course page: http://bit.ly/tecn-progr - teaching material: https://github.com/TdP-2019/materiale - exercises and labs: https://github.com/TdP-2019 - exam papers: https://github.com/TdP-esami These video lectures are also available as a playlist on YouTube:</description>
<size>12199070376</size>
</item><item>
<title>DRIMDB (Diabetic Retinopathy Images Database) Database for Quality Testing of Retinal Images</title>
<category>Dataset</category>
<infohash>99811ba62918f8e73791d21be29dcc372d660305</infohash>
<guid>https://academictorrents.com/details/99811ba62918f8e73791d21be29dcc372d660305</guid>
<link>https://academictorrents.com/details/99811ba62918f8e73791d21be29dcc372d660305</link>
<description>Retinal image quality assessment (IQA) is a crucial process for automated retinal image analysis systems to obtain an accurate and successful diagnosis of retinal diseases. Consequently, the first step in a good retinal image analysis system is measuring the quality of the input image. We present an approach for finding medically suitable retinal images for retinal diagnosis. We used a three-class grading system that consists of good, bad, and outlier classes. We created a retinal image quality dataset with a total of 216 consecutive images called the Diabetic Retinopathy Image Database. We identified the suitable images within the good images for automatic retinal image analysis systems using a novel method. Subsequently, we evaluated our retinal image suitability approach using the Digital Retinal Images for Vessel Extraction and Standard Diabetic Retinopathy Database Calibration level 1 public datasets. The results were measured through the F1 metric, which is a harmonic mean of precision and recall metrics. The highest F1 scores of the IQA tests were 99.60%, 96.50%, and 85.00% for good, bad, and outlier classes, respectively. Additionally, the accuracy of our suitable image detection approach was 98.08%. Our approach can be integrated into any automatic retinal analysis system with sufficient performance scores. Good: https://i.imgur.com/D5unNKs.png Bad: https://i.imgur.com/slFzaCZ.png Outlier: https://i.imgur.com/eG4PDet.png</description>
<size>17074713</size>
</item><item>
<title>DiaRetDB1 V2.1 - Diabetic Retinopathy Database</title>
<category>Dataset</category>
<infohash>817b91fd639263f6f644de4ccc9575c20b005c6c</infohash>
<guid>https://academictorrents.com/details/817b91fd639263f6f644de4ccc9575c20b005c6c</guid>
<link>https://academictorrents.com/details/817b91fd639263f6f644de4ccc9575c20b005c6c</link>
<description>The DiaRetDB1 is a public database for evaluating and benchmarking diabetic retinopathy detection algorithms. The database contains digital images of eye fundus and expert-annotated ground truth for several well-known diabetic fundus lesions (hard exudates, soft exudates, microaneurysms and hemorrhages). The original images and the raw ground truth are both available. In addition to the data, we also provide Matlab functionality (M-files) to read the data (XML files), fuse the data of several experts, and evaluate detection methods. This database is related to the ImageRet project, and the ground truth was collected using our ImgAnnoTool image annotation tool (contact Lasse Lensu for more information). For a more detailed description, please see our documentation. ### Authors The following authors have significantly contributed to the actual work of establishing and collecting the data and implementing the methods for the database: Tomi Kauppi, Valentina Kalesnykiene, Iiris Sorri, Asta Raninen, Raija Voutilainen, Joni Kamarainen, Lasse Lensu and Hannu Uusitalo. 90 images https://i.imgur.com/Oy7GJSR.png</description>
<size>144096332</size>
</item><item>
<title>MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition</title>
<category>Dataset</category>
<infohash>9e67eb7cc23c9417f39778a8e06cca5e26196a97</infohash>
<guid>https://academictorrents.com/details/9e67eb7cc23c9417f39778a8e06cca5e26196a97</guid>
<link>https://academictorrents.com/details/9e67eb7cc23c9417f39778a8e06cca5e26196a97</link>
<description>In this paper, we design a benchmark task and provide the associated datasets for recognizing face images and linking them to corresponding entity keys in a knowledge base. More specifically, we propose a benchmark task to recognize one million celebrities from their face images, using all the face images of each individual that can be collected on the web as training data. The rich information provided by the knowledge base helps to conduct disambiguation and improve the recognition accuracy, and contributes to various real-world applications, such as image captioning and news video analysis. Associated with this task, we design and provide a concrete measurement set, an evaluation protocol, as well as training data. We also present our experiment setup in detail and report promising baseline results. Our benchmark task could lead to one of the largest classification problems in computer vision. To the best of our knowledge, our training dataset, which contains 10M images in version 1, is the largest publicly available one in the world.</description>
<size>246390693904</size>
</item><item>
<title>OpenWebText (Gokaslan's distribution, 2019), GPT-2 Tokenized</title>
<category>Dataset</category>
<infohash>36c39b25657ce1639ccec0a91cf242b42e1f01db</infohash>
<guid>https://academictorrents.com/details/36c39b25657ce1639ccec0a91cf242b42e1f01db</guid>
<link>https://academictorrents.com/details/36c39b25657ce1639ccec0a91cf242b42e1f01db</link>
<description>Code by eukaryote31 and Joshua Peterson: https://github.com/jcpeterson/openwebtext and https://github.com/eukaryote31/openwebtext Scraped by Aaron Gokaslan and Vanya Cohen: https://skylion007.github.io/OpenWebTextCorpus/ Tokenized by eukaryote31</description>
<size>16023403913</size>
</item><item>
<title>Verification Based Annotation for Visual Recognition</title>
<category>Dataset</category>
<infohash>e780e1a9e898e53e72c16cb5fcc6d61d90cc4d27</infohash>
<guid>https://academictorrents.com/details/e780e1a9e898e53e72c16cb5fcc6d61d90cc4d27</guid>
<link>https://academictorrents.com/details/e780e1a9e898e53e72c16cb5fcc6d61d90cc4d27</link>
<description/>
<size>7610852004</size>
</item><item>
<title>trees.tar.gz</title>
<category>Dataset</category>
<infohash>d8ceccf6d9a57b799003205e0567e630b0ecb90e</infohash>
<guid>https://academictorrents.com/details/d8ceccf6d9a57b799003205e0567e630b0ecb90e</guid>
<link>https://academictorrents.com/details/d8ceccf6d9a57b799003205e0567e630b0ecb90e</link>
<description>Applying deep learning to new domains usually implies a considerable data collection problem. We look at the idea of using a partially trained model as an aid to a human annotator, by providing the partially trained model's prediction as a starting point for the annotator to edit directly. This is demonstrated by applying our ideas to building a small segmentation dataset for labeling trees in a plantation. We also show that by starting with a pre-trained model and fine-tuning, we can provide a useful aid to a human annotator using very few input images.</description>
<size>59771842</size>
</item><item>
<title>MIT-BIH Arrhythmia Database</title>
<category>Dataset</category>
<infohash>78d14c9cb4fa765b3c323c1a26bd114e2b30ef34</infohash>
<guid>https://academictorrents.com/details/78d14c9cb4fa765b3c323c1a26bd114e2b30ef34</guid>
<link>https://academictorrents.com/details/78d14c9cb4fa765b3c323c1a26bd114e2b30ef34</link>
<description>Since 1975, our laboratories at Boston's Beth Israel Hospital (now the Beth Israel Deaconess Medical Center) and at MIT have supported our own research into arrhythmia analysis and related subjects. One of the first major products of that effort was the MIT-BIH Arrhythmia Database, which we completed and began distributing in 1980. The database was the first generally available set of standard test material for evaluation of arrhythmia detectors, and has been used for that purpose as well as for basic research into cardiac dynamics at more than 500 sites worldwide. Originally, we distributed the database on 9-track half-inch digital tape at 800 and 1600 bpi, and on quarter-inch IRIG-format FM analog tape. In August, 1989, we produced a CD-ROM version of the database. The MIT-BIH Arrhythmia Database contains 48 half-hour excerpts of two-channel ambulatory ECG recordings, obtained from 47 subjects studied by the BIH Arrhythmia Laboratory between 1975 and 1979. Twenty-three recordings were chosen at random from a set of 4000 24-hour ambulatory ECG recordings collected from a mixed population of inpatients (about 60%) and outpatients (about 40%) at Boston's Beth Israel Hospital; the remaining 25 recordings were selected from the same set to include less common but clinically significant arrhythmias that would not be well-represented in a small random sample. The recordings were digitized at 360 samples per second per channel with 11-bit resolution over a 10 mV range. Two or more cardiologists independently annotated each record; disagreements were resolved to obtain the computer-readable reference annotations for each beat (approximately 110,000 annotations in all) included with the database. This directory contains the entire MIT-BIH Arrhythmia Database. About half (25 of 48 complete records, and reference annotation files for all 48 records) of this database has been freely available here since PhysioNet's inception in September 1999. 
The 23 remaining signal files, which had been available only on the MIT-BIH Arrhythmia Database CD-ROM, were posted here in February 2005. Much more information about this database may be found in the MIT-BIH Arrhythmia Database Directory. ## Citation Moody GB, Mark RG. The impact of the MIT-BIH Arrhythmia Database. IEEE Eng in Med and Biol 20(3):45-50 (May-June 2001). (PMID: 11446209)</description>
<size>93861490</size>
</item><item>
<title>1000 Genomes Project</title>
<category>Dataset</category>
<infohash>648ded078fbdfec60ce1c30e7f699624f6b05c7a</infohash>
<guid>https://academictorrents.com/details/648ded078fbdfec60ce1c30e7f699624f6b05c7a</guid>
<link>https://academictorrents.com/details/648ded078fbdfec60ce1c30e7f699624f6b05c7a</link>
<description>Variant count files storing genetic variation across 1092 complete human genomes</description>
<size>17416339333</size>
</item><item>
<title>1000 Genomes Project</title>
<category>Dataset</category>
<infohash>c44077f7770ff53989927a6a3bb81c5e82624141</infohash>
<guid>https://academictorrents.com/details/c44077f7770ff53989927a6a3bb81c5e82624141</guid>
<link>https://academictorrents.com/details/c44077f7770ff53989927a6a3bb81c5e82624141</link>
<description>Variant count files containing information about SNPs, indels, and other variations, for chromosomes and mitochondria from 1092 different human genomes, generated by the 1000 Genome Project.</description>
<size>17416339333</size>
</item><item>
<title>CMU Graphics Lab Motion Capture Database Converted to FBX</title>
<category>Dataset</category>
<infohash>8e21416d1584981ef3e9d8a97ee4278f93390623</infohash>
<guid>https://academictorrents.com/details/8e21416d1584981ef3e9d8a97ee4278f93390623</guid>
<link>https://academictorrents.com/details/8e21416d1584981ef3e9d8a97ee4278f93390623</link>
<description>Collection of various motion capture recordings (walking, dancing, sports, and others) performed by over 140 subjects. The database contains free motions which you can download and use. The original dataset is delivered by the authors in the Acclaim format. This version of the dataset is a conversion to FBX based on the BVH conversion by B. Hahne with some fixes in T-Poses and framerates.</description>
<size>1917480550</size>
</item><item>
<title>flyingthings3d__one_sixteenth_baseline__opticalflow.tar.bz2</title>
<category>Dataset</category>
<infohash>e04f244538a23dbfc55e1012b4c718c4cb9cddc3</infohash>
<guid>https://academictorrents.com/details/e04f244538a23dbfc55e1012b4c718c4cb9cddc3</guid>
<link>https://academictorrents.com/details/e04f244538a23dbfc55e1012b4c718c4cb9cddc3</link>
<description>This torrent contains the "Optical Flow" data for a one-sixteenth-baseline version of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>352740882095</size>
</item><item>
<title>flyingthings3d__one_sixteenth_baseline__finalpass.tar.bz2</title>
<category>Dataset</category>
<infohash>aebf0844ce8d7600ddf6c2abf41555980bc59d5f</infohash>
<guid>https://academictorrents.com/details/aebf0844ce8d7600ddf6c2abf41555980bc59d5f</guid>
<link>https://academictorrents.com/details/aebf0844ce8d7600ddf6c2abf41555980bc59d5f</link>
<description>This torrent contains the "Finalpass" image data for a one-sixteenth-baseline version of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>47320142248</size>
</item><item>
<title>flyingthings3d__one_sixteenth_baseline__disparitychange.tar.bz2</title>
<category>Dataset</category>
<infohash>9c31634dd91d8df5632655ba28acd8a979d368ed</infohash>
<guid>https://academictorrents.com/details/9c31634dd91d8df5632655ba28acd8a979d368ed</guid>
<link>https://academictorrents.com/details/9c31634dd91d8df5632655ba28acd8a979d368ed</link>
<description>This torrent contains the "Disparity Change" data for a one-sixteenth-baseline version of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>131667761926</size>
</item><item>
<title>flyingthings3d__one_sixteenth_baseline__disparity.tar.bz2</title>
<category>Dataset</category>
<infohash>93fbe3746affea07b9767c27c398f3d4842880e9</infohash>
<guid>https://academictorrents.com/details/93fbe3746affea07b9767c27c398f3d4842880e9</guid>
<link>https://academictorrents.com/details/93fbe3746affea07b9767c27c398f3d4842880e9</link>
<description>This torrent contains the "Disparity" data for a one-sixteenth-baseline version of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>98609734524</size>
</item><item>
<title>flyingthings3d__one_sixteenth_baseline__cleanpass.tar.bz2</title>
<category>Dataset</category>
<infohash>de4602c3f53ba86d1542a48645e940298174d3cf</infohash>
<guid>https://academictorrents.com/details/de4602c3f53ba86d1542a48645e940298174d3cf</guid>
<link>https://academictorrents.com/details/de4602c3f53ba86d1542a48645e940298174d3cf</link>
<description>This torrent contains the "Cleanpass" image data for a one-sixteenth-baseline version of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>41071552590</size>
</item><item>
<title>flyingthings3d__one_quarter_baseline__opticalflow.tar.bz2</title>
<category>Dataset</category>
<infohash>1369193346685e7dff8733e097e534944c8b8f45</infohash>
<guid>https://academictorrents.com/details/1369193346685e7dff8733e097e534944c8b8f45</guid>
<link>https://academictorrents.com/details/1369193346685e7dff8733e097e534944c8b8f45</link>
<description>This torrent contains the "Optical Flow" data for a one-quarter-baseline version of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>348350298245</size>
</item><item>
<title>flyingthings3d__one_quarter_baseline__finalpass.tar.bz2</title>
<category>Dataset</category>
<infohash>647c57247f1f836b1564956acbf79a9506a64461</infohash>
<guid>https://academictorrents.com/details/647c57247f1f836b1564956acbf79a9506a64461</guid>
<link>https://academictorrents.com/details/647c57247f1f836b1564956acbf79a9506a64461</link>
<description>This torrent contains the "Finalpass" image data for a one-quarter-baseline version of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>46747868760</size>
</item><item>
<title>flyingthings3d__one_quarter_baseline__disparitychange.tar.bz2</title>
<category>Dataset</category>
<infohash>9ab532b068d4fc95dc1fe159a50928cab77b4e76</infohash>
<guid>https://academictorrents.com/details/9ab532b068d4fc95dc1fe159a50928cab77b4e76</guid>
<link>https://academictorrents.com/details/9ab532b068d4fc95dc1fe159a50928cab77b4e76</link>
<description>This torrent contains the "Disparity Change" data for a one-quarter-baseline version of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>130119970220</size>
</item><item>
<title>flyingthings3d__one_quarter_baseline__disparity.tar.bz2</title>
<category>Dataset</category>
<infohash>eb7257a548ce8296c1cee8274463c977344b9540</infohash>
<guid>https://academictorrents.com/details/eb7257a548ce8296c1cee8274463c977344b9540</guid>
<link>https://academictorrents.com/details/eb7257a548ce8296c1cee8274463c977344b9540</link>
<description>This torrent contains the "Disparity" data for a one-quarter-baseline version of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>97421826079</size>
</item><item>
<title>flyingthings3d__one_quarter_baseline__cleanpass.tar.bz2</title>
<category>Dataset</category>
<infohash>231f45983250b25fb249c619440885b83d63d192</infohash>
<guid>https://academictorrents.com/details/231f45983250b25fb249c619440885b83d63d192</guid>
<link>https://academictorrents.com/details/231f45983250b25fb249c619440885b83d63d192</link>
<description>This torrent contains the "Cleanpass" image data for a one-quarter-baseline version of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>40577248089</size>
</item><item>
<title>OpenWebText-urls-26M-filtered.xz</title>
<category>Dataset</category>
<infohash>f5161721b322bca66ed74da32b963c1066e64312</infohash>
<guid>https://academictorrents.com/details/f5161721b322bca66ed74da32b963c1066e64312</guid>
<link>https://academictorrents.com/details/f5161721b322bca66ed74da32b963c1066e64312</link>
<description>Every outbound Reddit link posted before 31 December 2018 with at least 3 karma. The list is filtered to remove image sites, non-scraper-friendly sites, and other media files.</description>
<size>480280068</size>
</item><item>
<title>All of OAPEN</title>
<category>Dataset</category>
<infohash>2f81a679199849d9297b0b2b1195b8e7beaf6637</infohash>
<guid>https://academictorrents.com/details/2f81a679199849d9297b0b2b1195b8e7beaf6637</guid>
<link>https://academictorrents.com/details/2f81a679199849d9297b0b2b1195b8e7beaf6637</link>
<description>The OAPEN Library contains freely accessible academic books, mainly in the humanities and social sciences. OAPEN works with publishers to build a quality-controlled collection of open access books, and provides services for publishers, libraries and research funders in the areas of deposit, quality assurance, dissemination, and digital preservation. OAPEN enables libraries and aggregators to use the metadata of all available titles in the OAPEN Library; the metadata is available in the formats and via the procedures described at the link below. The zip file contains all of the PDFs available from OAPEN at the time of upload (2019-05-07). Further information available at: http://www.oapen.org/content/metadata</description>
<size>92579515768</size>
</item><item>
<title>Robust Global Translations with 1DSfM</title>
<category>Dataset</category>
<infohash>9fba8b6d6323a8eb66b0fee0886f134a16625eef</infohash>
<guid>https://academictorrents.com/details/9fba8b6d6323a8eb66b0fee0886f134a16625eef</guid>
<link>https://academictorrents.com/details/9fba8b6d6323a8eb66b0fee0886f134a16625eef</link>
<description>We present a simple, effective method for solving structure from motion problems by averaging epipolar geometries. Based on recent successes in solving for global camera rotations using averaging schemes, we focus on the problem of solving for 3D camera translations given a network of noisy pairwise camera translation directions (or 3D point observations). To do this well, we have two main insights. First, we propose a method for removing outliers from problem instances by solving simpler low-dimensional subproblems, which we refer to as 1DSfM problems. Second, we present a simple, principled averaging scheme. We demonstrate this new method in the wild on Internet photo collections.</description>
<size>2089711911</size>
</item><item>
<title>The Cityscapes Dataset for Semantic Urban Scene Understanding</title>
<category>Dataset</category>
<infohash>4f76b97fbb851fac002dcc55dcc55883e9728db7</infohash>
<guid>https://academictorrents.com/details/4f76b97fbb851fac002dcc55dcc55883e9728db7</guid>
<link>https://academictorrents.com/details/4f76b97fbb851fac002dcc55dcc55883e9728db7</link>
<description>The Cityscapes Dataset focuses on semantic understanding of urban street scenes. In the following, we give an overview of the design choices that were made to target the dataset’s focus.</description>
<size>78898813019</size>
</item><item>
<title>All of Hindawi </title>
<category>Dataset</category>
<infohash>b8c887df49fffde97419f3d03382fa81da1c6f88</infohash>
<guid>https://academictorrents.com/details/b8c887df49fffde97419f3d03382fa81da1c6f88</guid>
<link>https://academictorrents.com/details/b8c887df49fffde97419f3d03382fa81da1c6f88</link>
<description>Hindawi XML Corpus Download In order to facilitate the use of Hindawi’s content for data mining purposes, Hindawi makes its full corpus of XML content available for download as a single .zip file. This .zip file is organized using a two-level folder structure, first by publication year, then by journal. For example, the folder called "2011" contains subfolders for any journal that has one or more published articles in 2011, and inside each of these folders are individual XML files for these articles. In addition, the downloaded .zip file contains an XML file called contents.xml, which provides an overview of all of the subfolders that exist within the main .zip file. The content of this .zip file is updated on a daily basis, and the XML files contained within this corpus download adhere to the NLM DTD. If you have questions about Hindawi’s XML corpus download, please contact help@hindawi.com. Source: https://www.hindawi.com/corpus/ List of Hindawi journals: https://www.hindawi.com/journals/</description>
<size>4836032512</size>
</item><item>
<title>All of PLOS</title>
<category>Dataset</category>
<infohash>d53cdac872766c455a737e64663790f1b8cdda43</infohash>
<guid>https://academictorrents.com/details/d53cdac872766c455a737e64663790f1b8cdda43</guid>
<link>https://academictorrents.com/details/d53cdac872766c455a737e64663790f1b8cdda43</link>
<description>Technical details: This zip file contains JATS-standard XML content of every PLOS article, including all Articles and Front Matter. It does not include Figures or Supplemental Data. It’s just under five GB in size, and is updated every day with new articles. We also make our articles available through PubMed Central and our API. Source: https://www.plos.org/text-and-data-mining</description>
<size>5632950272</size>
</item><item>
<title>Stanford Drone Dataset</title>
<category>Dataset</category>
<infohash>01f95ea32e160e6c251ea55a87bd5a24b23cb03d</infohash>
<guid>https://academictorrents.com/details/01f95ea32e160e6c251ea55a87bd5a24b23cb03d</guid>
<link>https://academictorrents.com/details/01f95ea32e160e6c251ea55a87bd5a24b23cb03d</link>
<description>When humans navigate a crowded space such as a university campus or the sidewalks of a busy street, they follow common sense rules based on social etiquette. In order to enable the design of new algorithms that can fully take advantage of these rules to better solve tasks such as target tracking or trajectory forecasting, we need to have access to better data. To that end, we contribute the very first large scale dataset (to the best of our knowledge) that collects images and videos of various types of agents (not just pedestrians, but also bicyclists, skateboarders, cars, buses, and golf carts) that navigate in a real world outdoor environment such as a university campus. In the above images, pedestrians are labeled in pink, bicyclists in red, skateboarders in orange, and cars in green. https://i.imgur.com/iJl5sUN.png https://i.imgur.com/XOBHAoE.png https://i.imgur.com/MDruCEV.png https://i.imgur.com/cYpHgG5.png ### CITATION If you find this dataset useful, please cite this paper (and refer to the data as the Stanford Drone Dataset or SDD): A. Robicquet, A. Sadeghian, A. Alahi, S. Savarese, Learning Social Etiquette: Human Trajectory Prediction In Crowded Scenes in European Conference on Computer Vision (ECCV), 2016.</description>
<size>71002113639</size>
</item><item>
<title>Inria Aerial Image Labeling Dataset</title>
<category>Dataset</category>
<infohash>cf445f6073540af0803ee345f46294f088e7bba5</infohash>
<guid>https://academictorrents.com/details/cf445f6073540af0803ee345f46294f088e7bba5</guid>
<link>https://academictorrents.com/details/cf445f6073540af0803ee345f46294f088e7bba5</link>
<description>The Inria Aerial Image Labeling Dataset addresses a core topic in remote sensing: the automatic pixelwise labeling of aerial imagery. Dataset features: Coverage of 810 km² (405 km² for training and 405 km² for testing) Aerial orthorectified color imagery with a spatial resolution of 0.3 m Ground truth data for two semantic classes: building and not building (publicly disclosed only for the training subset) The images cover dissimilar urban settlements, ranging from densely populated areas (e.g., San Francisco’s financial district) to alpine towns (e.g., Lienz in Austrian Tyrol). Instead of splitting adjacent portions of the same images into the training and test subsets, different cities are included in each of the subsets. For example, images over Chicago are included in the training set (and not in the test set) and images over San Francisco are included in the test set (and not in the training set). The ultimate goal of this dataset is to assess the generalization power of the techniques: while Chicago imagery may be used for training, the system should label aerial images over other regions, with varying illumination conditions, urban landscape and time of the year. The dataset was constructed by combining public domain imagery and public domain official building footprints. https://i.imgur.com/wAL5IUX.png Citation Emmanuel Maggiori, Yuliya Tarabalka, Guillaume Charpiat and Pierre Alliez. “Can Semantic Labeling Methods Generalize to Any City? The Inria Aerial Image Labeling Benchmark”. IEEE International Geoscience and Remote Sensing Symposium (IGARSS). 2017.</description>
<size>20957265875</size>
</item><item>
<title>PROSTATEx</title>
<category>Dataset</category>
<infohash>5a447ff50062194bd58dd11c0fedead59e6d873c</infohash>
<guid>https://academictorrents.com/details/5a447ff50062194bd58dd11c0fedead59e6d873c</guid>
<link>https://academictorrents.com/details/5a447ff50062194bd58dd11c0fedead59e6d873c</link>
<description>This collection is a retrospective set of prostate MR studies. All studies included T2-weighted (T2W), proton density-weighted (PD-W), dynamic contrast enhanced (DCE), and diffusion-weighted (DW) imaging. The images were acquired on two different types of Siemens 3T MR scanners, the MAGNETOM Trio and Skyra. T2-weighted images were acquired using a turbo spin echo sequence and had a resolution of around 0.5 mm in plane and a slice thickness of 3.6 mm. The DCE time series was acquired using a 3-D turbo flash gradient echo sequence with a resolution of around 1.5 mm in-plane, a slice thickness of 4 mm and a temporal resolution of 3.5 s. The proton density weighted image was acquired prior to the DCE time series using the same sequence with different echo and repetition times and a different flip angle. Finally, the DWI series were acquired with a single-shot echo planar imaging sequence with a resolution of 2 mm in-plane and 3.6 mm slice thickness and with diffusion-encoding gradients in three directions. Three b-values were acquired (50, 400, and 800), and subsequently, the ADC map was calculated by the scanner software. All images were acquired without an endorectal coil. https://i.imgur.com/dh121Ur.png ## Citation G. Litjens, O. Debats, J. Barentsz, N. Karssemeijer and H. Huisman. "Computer-aided detection of prostate cancer in MRI", IEEE Transactions on Medical Imaging 2014;33:1083-1092.</description>
<size>4324268308</size>
</item><item>
<title>Head-Neck-CT</title>
<category>Dataset</category>
<infohash>d06aafd957f0c8c9b0eb4636e5c3ebdb7bdaf54f</infohash>
<guid>https://academictorrents.com/details/d06aafd957f0c8c9b0eb4636e5c3ebdb7bdaf54f</guid>
<link>https://academictorrents.com/details/d06aafd957f0c8c9b0eb4636e5c3ebdb7bdaf54f</link>
<description>https://i.imgur.com/4jYnRqK.png This is a subset of just the CT scans from the original dataset. "This collection contains FDG-PET/CT and radiotherapy planning CT imaging data of 298 patients from four different institutions in Québec with histologically proven head-and-neck cancer (H&amp;N). All patients had pre-treatment FDG-PET/CT scans between April 2006 and November 2014, and within a median of 18 days (range: 6-66) before treatment. Dates in the TCIA images have been changed in the interest of de-identification; the same change was applied across all images, preserving the time intervals between serial scans." These patients were all part of a study described in further detail (treatment, image scanning protocols, etc.) in the publication: ## Publication Citation Vallières, M. et al. Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer. Sci Rep 7, 10117 (2017). doi: 10.1038/s41598-017-10371-5</description>
<size>22836341441</size>
</item><item>
<title>EPIC-KITCHENS 2018</title>
<category>Dataset</category>
<infohash>d08f4591d1865bbe3436d1eb25ed55aae8b8f043</infohash>
<guid>https://academictorrents.com/details/d08f4591d1865bbe3436d1eb25ed55aae8b8f043</guid>
<link>https://academictorrents.com/details/d08f4591d1865bbe3436d1eb25ed55aae8b8f043</link>
<description>The largest dataset in egocentric vision to date. Full details at: http://epic-kitchens.github.io</description>
<size>1148073796667</size>
</item><item>
<title>UCF Google Street View Dataset 2014</title>
<category>Dataset</category>
<infohash>e52a8978af7c2f734f2b30795075dbcd50efc983</infohash>
<guid>https://academictorrents.com/details/e52a8978af7c2f734f2b30795075dbcd50efc983</guid>
<link>https://academictorrents.com/details/e52a8978af7c2f734f2b30795075dbcd50efc983</link>
<description>https://i.imgur.com/MjhbQgK.png The dataset contains 62,058 high quality Google Street View images. The images cover the downtown and neighboring areas of Pittsburgh, PA; Orlando, FL and partially Manhattan, NY. Accurate GPS coordinates of the images and their compass direction are provided as well. For each Street View placemark (i.e. each spot on one street), the 360° spherical view is broken down into 4 side views and 1 upward view. There is one additional image per placemark which shows some overlaid markers, such as the address, name of streets, etc. ### Citation: Please cite the following paper for which this data was collected (partially): Image Geo-localization based on Multiple Nearest Neighbor Feature Matching using Generalized Graphs. Amir Roshan Zamir and Mubarak Shah. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2014.</description>
<size>46247776646</size>
</item><item>
<title>PADCHEST_SJ (Feb 2019 Update)</title>
<category>Dataset</category>
<infohash>dec12db21d57e158f78621f06dcbe78248d14850</infohash>
<guid>https://academictorrents.com/details/dec12db21d57e158f78621f06dcbe78248d14850</guid>
<link>https://academictorrents.com/details/dec12db21d57e158f78621f06dcbe78248d14850</link>
<description>This dataset includes more than 160,000 images obtained from 67,000 patients that were interpreted and reported by radiologists at Hospital San Juan (Spain) from 2009 to 2017, covering six different position views and additional information on image acquisition and patient demographics. The reports were labeled with 174 different radiographic findings, 19 differential diagnoses and 104 anatomic locations organized as a hierarchical taxonomy and mapped onto standard Unified Medical Language System (UMLS) terminology. https://i.imgur.com/MpVlYgB.png</description>
<size>1127527483252</size>
</item><item>
<title>A browserless architecture for extracting web prices</title>
<category>Paper</category>
<infohash>ed3aaa1eab59d612c4de2275c0e3c2d2f0d2ac05</infohash>
<guid>https://academictorrents.com/details/ed3aaa1eab59d612c4de2275c0e3c2d2f0d2ac05</guid>
<link>https://academictorrents.com/details/ed3aaa1eab59d612c4de2275c0e3c2d2f0d2ac05</link>
<description/>
<size>16066532</size>
</item><item>
<title>Lung CT Segmentation Challenge 2017 (LCTSC)</title>
<category>Dataset</category>
<infohash>0a3611528c9172383656cb1b6a07cfb7f095eb82</infohash>
<guid>https://academictorrents.com/details/0a3611528c9172383656cb1b6a07cfb7f095eb82</guid>
<link>https://academictorrents.com/details/0a3611528c9172383656cb1b6a07cfb7f095eb82</link>
<description>Average 4DCT or free-breathing (FB) CT images from 60 patients, depending on clinical practice, are used for this challenge. Data were acquired from 3 institutions (20 patients each). Datasets were divided into three groups, stratified per institution: 36 training datasets, 12 off-site test datasets, and 12 live test datasets. https://i.imgur.com/CzjcFRj.png Collection statistics: Image Size: 4.8 GB; Modalities: CT, RT; Number of Images: 9569; Number of Patients: 60; Number of Series: 96; Number of Studies: 60.</description>
<size>5108773000</size>
</item><item>
<title>book_library</title>
<category>Paper</category>
<infohash>57ca3daaaac46400731e61ed80d130b273001c6f</infohash>
<guid>https://academictorrents.com/details/57ca3daaaac46400731e61ed80d130b273001c6f</guid>
<link>https://academictorrents.com/details/57ca3daaaac46400731e61ed80d130b273001c6f</link>
<description>textbook_torrents_collection_from_book_noon</description>
<size>2368760109</size>
</item><item>
<title>gdelt_urls</title>
<category>Paper</category>
<infohash>f82411dd4918bd4d31bde1af380cea4913860c80</infohash>
<guid>https://academictorrents.com/details/f82411dd4918bd4d31bde1af380cea4913860c80</guid>
<link>https://academictorrents.com/details/f82411dd4918bd4d31bde1af380cea4913860c80</link>
<description>37 GB of urls from GDELT (https://www.gdeltproject.org/). Downloaded from Google BigQuery</description>
<size>37476616677</size>
</item><item>
<title>Flickr8k Dataset</title>
<category>Dataset</category>
<infohash>9dea07ba660a722ae1008c4c8afdd303b6f6e53b</infohash>
<guid>https://academictorrents.com/details/9dea07ba660a722ae1008c4c8afdd303b6f6e53b</guid>
<link>https://academictorrents.com/details/9dea07ba660a722ae1008c4c8afdd303b6f6e53b</link>
<description>8,000 photos and up to 5 captions for each photo. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. … The images were chosen from six different Flickr groups, and tend not to contain any well-known people or locations, but were manually selected to depict a variety of scenes and situations. https://i.imgur.com/6RxAndT.png ## Citation Hodosh, Micah, Peter Young, and Julia Hockenmaier. "Framing image description as a ranking task: Data, models and evaluation metrics." Journal of Artificial Intelligence Research 47 (2013): 853-899.</description>
<size>1117760547</size>
</item><item>
<title>SPI_val.tar.gz</title>
<category>Dataset</category>
<infohash>f0977a00ca9d61eefdfa232515ac6690d3b56fc5</infohash>
<guid>https://academictorrents.com/details/f0977a00ca9d61eefdfa232515ac6690d3b56fc5</guid>
<link>https://academictorrents.com/details/f0977a00ca9d61eefdfa232515ac6690d3b56fc5</link>
<description/>
<size>887999327</size>
</item><item>
<title>SPI_train.tar.gz</title>
<category>Dataset</category>
<infohash>8649f33cf8d661cbf3290bd4216c6ac637b777e0</infohash>
<guid>https://academictorrents.com/details/8649f33cf8d661cbf3290bd4216c6ac637b777e0</guid>
<link>https://academictorrents.com/details/8649f33cf8d661cbf3290bd4216c6ac637b777e0</link>
<description/>
<size>34892707597</size>
</item><item>
<title>SPI_eval.tar.gz</title>
<category>Dataset</category>
<infohash>0a3344160f3bb1b6a82f2552cf8503d25e1b6b48</infohash>
<guid>https://academictorrents.com/details/0a3344160f3bb1b6a82f2552cf8503d25e1b6b48</guid>
<link>https://academictorrents.com/details/0a3344160f3bb1b6a82f2552cf8503d25e1b6b48</link>
<description/>
<size>5446714283</size>
</item><item>
<title>ImageNet Large Scale Visual Recognition Challenge (V2017)</title>
<category>Dataset</category>
<infohash>943977d8c96892d24237638335e481f3ccd54cfb</infohash>
<guid>https://academictorrents.com/details/943977d8c96892d24237638335e481f3ccd54cfb</guid>
<link>https://academictorrents.com/details/943977d8c96892d24237638335e481f3ccd54cfb</link>
<description/>
<size>166022728827</size>
</item><item>
<title>LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop (V2017)</title>
<category>Dataset</category>
<infohash>c53c374bd6de76da7fe76ed5c9e3c7c6c691c489</infohash>
<guid>https://academictorrents.com/details/c53c374bd6de76da7fe76ed5c9e3c7c6c691c489</guid>
<link>https://academictorrents.com/details/c53c374bd6de76da7fe76ed5c9e3c7c6c691c489</link>
<description>LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop</description>
<size>168089472771</size>
</item><item>
<title>WebText dataset urls.txt.tar.gz</title>
<category>Dataset</category>
<infohash>15f3494b2991e75194d3af72bf7afa5025a7abc3</infohash>
<guid>https://academictorrents.com/details/15f3494b2991e75194d3af72bf7afa5025a7abc3</guid>
<link>https://academictorrents.com/details/15f3494b2991e75194d3af72bf7afa5025a7abc3</link>
<description>Collection of URLs hosting content used in the WebText dataset described by OpenAI here: https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf URLs obtained with the scripts by eukaryote31</description>
<size>1754756585</size>
</item><item>
<title>Invasive Ductal Carcinoma (IDC) Histology Image Dataset</title>
<category>Dataset</category>
<infohash>e40bd59ab08861329ce3c418be191651f35e2ffa</infohash>
<guid>https://academictorrents.com/details/e40bd59ab08861329ce3c418be191651f35e2ffa</guid>
<link>https://academictorrents.com/details/e40bd59ab08861329ce3c418be191651f35e2ffa</link>
<description>Invasive Ductal Carcinoma (IDC) is the most common subtype of all breast cancers. To assign an aggressiveness grade to a whole mount sample, pathologists typically focus on the regions which contain the IDC. As a result, one of the common pre-processing steps for automatic aggressiveness grading is to delineate the exact regions of IDC inside of a whole mount slide. Dataset Description The original dataset consisted of 162 whole mount slide images of Breast Cancer (BCa) specimens scanned at 40x. From that, 277,524 patches of size 50 x 50 were extracted (198,738 IDC negative and 78,786 IDC positive). Each patch’s file name follows the format u_xX_yY_classC.png (e.g., 10253_idx5_x1351_y1101_class0.png), where u is the patient ID (10253_idx5), X is the x-coordinate of where this patch was cropped from, Y is the y-coordinate of where this patch was cropped from, and C indicates the class, where 0 is non-IDC and 1 is IDC. https://i.imgur.com/sDQrEp2.png</description>
<size>1644892042</size>
</item><item>
<title>CMU 11-785 Introduction to Deep Learning Spring 2019</title>
<category>Course</category>
<infohash>9bd4c5afe4f80a0b0e519923ff1eb5c76e5c07a9</infohash>
<guid>https://academictorrents.com/details/9bd4c5afe4f80a0b0e519923ff1eb5c76e5c07a9</guid>
<link>https://academictorrents.com/details/9bd4c5afe4f80a0b0e519923ff1eb5c76e5c07a9</link>
<description>“Deep Learning” systems, typified by deep neural networks, are increasingly taking over all AI tasks, ranging from language understanding, and speech and image recognition, to machine translation, planning, and even game playing and autonomous driving. As a result, expertise in deep learning is fast changing from an esoteric desirable to a mandatory prerequisite in many advanced academic settings, and a large advantage in the industrial job market. In this course we will learn about the basics of deep neural networks, and their applications to various AI tasks. By the end of the course, it is expected that students will have significant familiarity with the subject, and be able to apply Deep Learning to a variety of tasks. They will also be positioned to understand much of the current literature on the topic and extend their knowledge through further study.</description>
<size>3273360608</size>
</item><item>
<title>HiSeqV2.gz</title>
<category>Dataset</category>
<infohash>e4081b995625f9fc599ad860138acf7b6eb1cf6f</infohash>
<guid>https://academictorrents.com/details/e4081b995625f9fc599ad860138acf7b6eb1cf6f</guid>
<link>https://academictorrents.com/details/e4081b995625f9fc599ad860138acf7b6eb1cf6f</link>
<description>TCGA pan-cancer gene expression by RNAseq, compiled using data from all TCGA cohorts. Gene expression was measured using the IlluminaHiSeq technology. Data from all TCGA cohorts are combined to produce this dataset. Values are log2(x+1) transformed RSEM values. version 2016-08-16</description>
<size>513041354</size>
</item><item>
<title>Shenzhen Hospital X-ray Set</title>
<category>Dataset</category>
<infohash>462728e890bd37c05e9439c885df7afc36209cc8</infohash>
<guid>https://academictorrents.com/details/462728e890bd37c05e9439c885df7afc36209cc8</guid>
<link>https://academictorrents.com/details/462728e890bd37c05e9439c885df7afc36209cc8</link>
<description>X-ray images in this data set have been collected by Shenzhen No.3 Hospital in Shenzhen, Guangdong province, China. The x-rays were acquired as part of the routine care at Shenzhen Hospital. The set contains images in JPEG format. There are 340 normal x-rays and 275 abnormal x-rays showing various manifestations of tuberculosis. Example images: https://i.imgur.com/e3fPm3a.png Example clinical reports: male, 24 yrs: PTB in the left lower field; female, 32 yrs: bilateral secondary PTB, right pleural change after decortication; female, 16 yrs: PTB in the left upper field. ## Citation Candemir S, Jaeger S, Musco J, Xue Z, Karargyris A, Antani SK, Thoma GR, Palaniappan K. Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration. IEEE Trans Med Imaging. 2014 Feb;33(2):577-90. doi: 10.1109/TMI.2013.2290491. PMID: 24239990 Jaeger S, Karargyris A, Candemir S, Folio L, Siegelman J, Callaghan FM, Xue Z, Palaniappan K, Singh RK, Antani SK. Automatic tuberculosis screening using chest radiographs. IEEE Trans Med Imaging. 2014 Feb;33(2):233-45. doi: 10.1109/TMI.2013.2284099. PMID: 24108713</description>
<size>3770205534</size>
</item><item>
<title>Montgomery County X-ray Set</title>
<category>Dataset</category>
<infohash>ac786f74878a5775c81d490b23842fd4736bfe33</infohash>
<guid>https://academictorrents.com/details/ac786f74878a5775c81d490b23842fd4736bfe33</guid>
<link>https://academictorrents.com/details/ac786f74878a5775c81d490b23842fd4736bfe33</link>
<description>X-ray images in this data set have been acquired from the tuberculosis control program of the Department of Health and Human Services of Montgomery County, MD, USA. This set contains 138 posterior-anterior x-rays, of which 80 x-rays are normal and 58 x-rays are abnormal with manifestations of tuberculosis. All images are de-identified and available in DICOM format. The set covers a wide range of abnormalities, including effusions and miliary patterns. The data set includes radiology readings available as a text file. https://i.imgur.com/JhQU2S5.png Left Lung (Right lung segmentation also exists) https://i.imgur.com/72dmGdB.png ## Citation Candemir S, Jaeger S, Musco J, Xue Z, Karargyris A, Antani SK, Thoma GR, Palaniappan K. Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration. IEEE Trans Med Imaging. 2014 Feb;33(2):577-90. doi: 10.1109/TMI.2013.2290491. PMID: 24239990 Jaeger S, Karargyris A, Candemir S, Folio L, Siegelman J, Callaghan FM, Xue Z, Palaniappan K, Singh RK, Antani SK. Automatic tuberculosis screening using chest radiographs. IEEE Trans Med Imaging. 2014 Feb;33(2):233-45. doi: 10.1109/TMI.2013.2284099. PMID: 24108713</description>
<size>616853875</size>
</item><item>
<title>FIGR-8</title>
<category>Dataset</category>
<infohash>303a6341bea91ab71717204631467ab9e68232bd</infohash>
<guid>https://academictorrents.com/details/303a6341bea91ab71717204631467ab9e68232bd</guid>
<link>https://academictorrents.com/details/303a6341bea91ab71717204631467ab9e68232bd</link>
<description/>
<size>6723732873</size>
</item><item>
<title>FIGR-8-SVG</title>
<category>Dataset</category>
<infohash>55911e0af5be7c7ccbbff5d35a8a8dfc2275bc50</infohash>
<guid>https://academictorrents.com/details/55911e0af5be7c7ccbbff5d35a8a8dfc2275bc50</guid>
<link>https://academictorrents.com/details/55911e0af5be7c7ccbbff5d35a8a8dfc2275bc50</link>
<description/>
<size>1917999580</size>
</item><item>
<title>Victor Yerena.txt</title>
<category>Paper</category>
<infohash>51f0aef3527d73ee84a0b031df648b349a01bd47</infohash>
<guid>https://academictorrents.com/details/51f0aef3527d73ee84a0b031df648b349a01bd47</guid>
<link>https://academictorrents.com/details/51f0aef3527d73ee84a0b031df648b349a01bd47</link>
<description>Test for sharing files.</description>
<size>18</size>
</item><item>
<title>IDRiD (Indian Diabetic Retinopathy Image Dataset)</title>
<category>Dataset</category>
<infohash>3bb974ffdad31f9df9d26a63ed2aea2f1d789405</infohash>
<guid>https://academictorrents.com/details/3bb974ffdad31f9df9d26a63ed2aea2f1d789405</guid>
<link>https://academictorrents.com/details/3bb974ffdad31f9df9d26a63ed2aea2f1d789405</link>
<description>IDRiD (Indian Diabetic Retinopathy Image Dataset) is the first database representative of an Indian population. Moreover, it is the only dataset constituting typical diabetic retinopathy lesions and also normal retinal structures annotated at a pixel level. This dataset provides information on the disease severity of diabetic retinopathy and diabetic macular edema for each image, which makes it well suited for the development and evaluation of image analysis algorithms for early detection of diabetic retinopathy. This dataset was made available as part of the "Diabetic Retinopathy: Segmentation and Grading Challenge" organised in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI-2018), Washington D.C. The dataset is divided into three parts: A. Segmentation: It consists of 1. Original color fundus images (81 images divided into train and test set - JPG files) 2. Groundtruth images for the lesions (microaneurysms, haemorrhages, hard exudates and soft exudates, divided into train and test set - TIF files) and the optic disc (divided into train and test set - TIF files) B. Disease Grading: It consists of 1. Original color fundus images (516 images divided into train set (413 images) and test set (103 images) - JPG files) 2. Groundtruth labels for diabetic retinopathy and diabetic macular edema severity grade (divided into train and test set - CSV file) C. Localization: It consists of 1. Original color fundus images (516 images divided into train set (413 images) and test set (103 images) - JPG files) 2. Groundtruth labels for optic disc center location (divided into train and test set - CSV file) 3. Groundtruth labels for fovea center location (divided into train and test set - CSV file) For more information visit idrid.grand-challenge.org Sample images (scaled down): https://i.imgur.com/gajYxoR.png Sample segmentations of microaneurysms (scaled down): https://i.imgur.com/f8irOmW.png Paper:</description>
<size>1010096056</size>
</item><item>
<title>Kaggle Diabetic Retinopathy Detection Training Dataset (DRD)</title>
<category>Dataset</category>
<infohash>08c244595c6cc4ec403b21023cf99c2b085cbc72</infohash>
<guid>https://academictorrents.com/details/08c244595c6cc4ec403b21023cf99c2b085cbc72</guid>
<link>https://academictorrents.com/details/08c244595c6cc4ec403b21023cf99c2b085cbc72</link>
<description>This dataset is a large set of high-resolution retina images taken under a variety of imaging conditions. A left and right field is provided for every subject. Images are labeled with a subject id as well as either left or right (e.g. 1_left.jpeg is the left eye of patient id 1). A clinician has rated the presence of diabetic retinopathy in each image on a scale of 0 to 4: 0 - No DR, 1 - Mild, 2 - Moderate, 3 - Severe, 4 - Proliferative DR. Total images: 35126. The distribution of labels is: 0: 25810, 1: 2443, 2: 5292, 3: 873, 4: 708. Your task is to create an automated analysis system capable of assigning a score based on this scale. The images in the dataset come from different models and types of cameras, which can affect the visual appearance of left vs. right. Some images are shown as one would see the retina anatomically (macula on the left, optic nerve on the right for the right eye). Others are shown as one would see through a microscope condensing lens (i.e. inverted, as one sees in a typical live eye exam). There are generally two ways to tell whether an image is inverted: It is inverted if the macula (the small dark central area) is slightly higher than the midline through the optic nerve; if the macula is lower than the midline of the optic nerve, it is not inverted. If there is a notch on the side of the image (square, triangle, or circle) then it is not inverted; if there is no notch, it is inverted. Like any real-world data set, you will encounter noise in both the images and labels. Images may contain artifacts, be out of focus, underexposed, or overexposed. A major aim of this competition is to develop robust algorithms that can function in the presence of noise and variation. https://i.imgur.com/Tmba2IF.png</description>
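The subject-and-eye naming convention above can be sketched with a small parser (a hypothetical helper, not part of the dataset; only the `1_left.jpeg` pattern is taken from the description):

```python
def parse_drd_filename(name: str) -> tuple[int, str]:
    """Split a DRD image name such as '1_left.jpeg' into (patient id, eye)."""
    stem = name.rsplit(".", 1)[0]        # drop the .jpeg extension
    patient_id, eye = stem.split("_")    # '1_left' -> ('1', 'left')
    return int(patient_id), eye

print(parse_drd_filename("1_left.jpeg"))  # -> (1, 'left')
```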
<size>34999421799</size>
</item><item>
<title>ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events</title>
<category>Dataset</category>
<infohash>c5bf370a90cae548d5a306c1be7d79186b9f60b9</infohash>
<guid>https://academictorrents.com/details/c5bf370a90cae548d5a306c1be7d79186b9f60b9</guid>
<link>https://academictorrents.com/details/c5bf370a90cae548d5a306c1be7d79186b9f60b9</link>
<description>The detection and identification of extreme weather events in large-scale climate simulations is an important problem for risk management, informing governmental policy decisions and advancing our basic understanding of the climate system. Recent work has shown that fully supervised convolutional neural networks (CNNs) can yield acceptable accuracy for classifying well-known types of extreme weather events when large amounts of labeled data are available. However, many different types of spatially localized climate patterns are of interest including hurricanes, extra-tropical cyclones, weather fronts, and blocking events among others. Existing labeled data for these patterns can be incomplete in various ways, such as covering only certain years or geographic areas and having false negatives. This type of climate data therefore poses a number of interesting machine learning challenges. We present a multichannel spatiotemporal CNN architecture for semi-supervised bounding box prediction and exploratory data analysis. We demonstrate that our approach is able to leverage temporal information and unlabeled data to improve the localization of extreme weather events. Further, we explore the representations learned by our model in order to better understand this important data. We present a dataset, ExtremeWeather, to encourage machine learning research in this area and to help facilitate further work in understanding and mitigating the effects of climate change. The dataset is available at extremeweatherdataset.github.io and the code is available at https://github.com/eracah/hur-detect. ## Citation Racah, Evan, et al. "ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events." Advances in Neural Information Processing Systems. 2017. ## Pictures https://extremeweatherdataset.github.io/variables.jpg</description>
<size>1652894362654</size>
</item><item>
<title>Common Crawl corpus - training-parallel-commoncrawl.tgz (CS-EN, DE-EN, ES-EN, FR-EN, RU-EN)</title>
<category>Dataset</category>
<infohash>2a4e272c4fd06abc3b3ee022fd2fd9e220b37c33</infohash>
<guid>https://academictorrents.com/details/2a4e272c4fd06abc3b3ee022fd2fd9e220b37c33</guid>
<link>https://academictorrents.com/details/2a4e272c4fd06abc3b3ee022fd2fd9e220b37c33</link>
<description/>
<size>918311367</size>
</item><item>
<title>UN corpus - training-parallel-un.tgz (ES-EN, FR-EN)</title>
<category>Dataset</category>
<infohash>e4dc3c28d6035a64af928dbdcbc8d6cc0d62d39c</infohash>
<guid>https://academictorrents.com/details/e4dc3c28d6035a64af928dbdcbc8d6cc0d62d39c</guid>
<link>https://academictorrents.com/details/e4dc3c28d6035a64af928dbdcbc8d6cc0d62d39c</link>
<description/>
<size>2365634246</size>
</item><item>
<title>Europarl v7 - training-parallel-europarl-v7.tgz (CS-EN, DE-EN, ES-EN, FR-EN)</title>
<category>Dataset</category>
<infohash>2c4dbfe50cda15026ebc2579b13edd532b10e911</infohash>
<guid>https://academictorrents.com/details/2c4dbfe50cda15026ebc2579b13edd532b10e911</guid>
<link>https://academictorrents.com/details/2c4dbfe50cda15026ebc2579b13edd532b10e911</link>
<description/>
<size>657632379</size>
</item><item>
<title>Semantic Web 2019 (ENG)</title>
<category>Course</category>
<infohash>41c49b7b3ef5d6e2f02ba9b09525dd8d377b72ca</infohash>
<guid>https://academictorrents.com/details/41c49b7b3ef5d6e2f02ba9b09525dd8d377b72ca</guid>
<link>https://academictorrents.com/details/41c49b7b3ef5d6e2f02ba9b09525dd8d377b72ca</link>
<description>Video Lectures of the short course on Semantic Web taken at Politecnico di Torino (Italy), in January 2019.</description>
<size>1776039829</size>
</item><item>
<title>The Analytics Edge [edX] Summer 2015</title>
<category>Course</category>
<infohash>1d4ec1fc1aca061e71cc504f93a7f9fca9ef7311</infohash>
<guid>https://academictorrents.com/details/1d4ec1fc1aca061e71cc504f93a7f9fca9ef7311</guid>
<link>https://academictorrents.com/details/1d4ec1fc1aca061e71cc504f93a7f9fca9ef7311</link>
<description>The Analytics Edge Through inspiring examples and stories, discover the power of data and use analytics to provide an edge to your career and your life. About this course In the last decade, the amount of data available to organizations has reached unprecedented levels. Data is transforming business, social interactions, and the future of our society. In this course, you will learn how to use data and analytics to give an edge to your career and your life. We will examine real-world examples of how analytics have been used to significantly improve a business or industry. These examples include Moneyball, eHarmony, the Framingham Heart Study, Twitter, IBM Watson, and Netflix. Through these examples and many more, we will teach you the following analytics methods: linear regression, logistic regression, trees, text analytics, clustering, visualization, and optimization. We will be using the statistical software R to build models and work with data. The contents of this course are essentially the same as those of the corresponding MIT class (The Analytics Edge). It is a challenging class, but it will enable you to apply analytics to real-world applications. The class will consist of lecture videos, which are broken into small pieces, usually between 4 and 8 minutes each. After each lecture piece, we will ask you a “quick question” to assess your understanding of the material. There will also be a recitation, in which one of the teaching assistants will go over the methods introduced with a new example and data set. Each week will have a homework assignment that involves working in R or LibreOffice with various data sets. (R is a free statistical and computing software environment we’ll use in the course. See the Software FAQ below for more info). At the end of the class there will be a final exam, which will be similar to the homework assignments. 
What you'll learn: An applied understanding of many different analytics methods, including linear regression, logistic regression, CART, clustering, and data visualization; how to implement all of these methods in R; and an applied understanding of mathematical optimization and how to solve optimization models in spreadsheet software. Prerequisites: Basic mathematical knowledge (at a high school level). You should be familiar with concepts like mean, standard deviation, and scatterplots. Mathematical maturity and prior experience with programming will decrease the estimated effort required for the class, but are not necessary to succeed. https://www.edx.org/course/the-analytics-edge-0</description>
<size>3067319577</size>
</item><item>
<title>DeepLesion (10,594 CT scans with lesions)</title>
<category>Dataset</category>
<infohash>de50f4d4aa3d028944647a56199c07f5fa6030ff</infohash>
<guid>https://academictorrents.com/details/de50f4d4aa3d028944647a56199c07f5fa6030ff</guid>
<link>https://academictorrents.com/details/de50f4d4aa3d028944647a56199c07f5fa6030ff</link>
<description>## Introduction The DeepLesion dataset contains 32,120 axial computed tomography (CT) slices from 10,594 CT scans (studies) of 4,427 unique patients. There are 1–3 lesions in each image with accompanying bounding boxes and size measurements, adding up to 32,735 lesions altogether. The lesion annotations were mined from NIH’s picture archiving and communication system (PACS). Some meta-data are also provided. The contents include: - Folder “Images\_png”: png image files. We named each slice with the format “patient index\_study index\_series index\_slice index.png”, with the last underscore being / or \ to indicate sub-folders. The images are stored as unsigned 16 bit. One should subtract 32768 from the pixel intensity to obtain the original Hounsfield unit (HU) values. We provide not only the key CT slice that contains the lesion annotation, but also its 3D context (30mm of extra slices above and below the key slice). Due to the large size of the data and the file size limit of the website, we packed them into 56 smaller zip files for downloading. - Key_slices.zip: key slices with overlaid lesion annotations for review purposes. - Folder “Key_slice_examples”: random image examples chosen from Key_slices.zip. - DL_info.csv: the annotations and meta-data. See Section “Annotations” below. ## Reference Ke Yan, Xiaosong Wang, Le Lu, Ronald M. Summers, "DeepLesion: Automated Mining of Large-Scale Lesion Annotations and Universal Lesion Detection with Deep Learning", Journal of Medical Imaging 5(3), 036501 (2018), doi: 10.1117/1.JMI.5.3.036501 ## Annotations In DL_info.csv, each row is the information of one lesion in DeepLesion. The meanings of the columns are: 1. File name. Please replace the last underscore with / or \ to indicate sub-folders. 2. Patient index, starting from 1. 3. Study index for each patient, starting from 1. There are 1~26 studies per patient. 4. Series ID. 5. Slice index of the key slice containing the lesion annotation, starting from 1. 6. 8D vector: the image coordinates (in pixels) of the two RECIST diameters of the lesion, [x11, y11, x12, y12, x21, y21, x22, y22]. The first 4 coordinates are for the long axis. Please see our paper and its supplementary material for further explanation. 7. 4D vector: the bounding box [x1, y1, x2, y2] of the lesion (in pixels), estimated from the RECIST diameters; see our paper. 8. 2D vector: the lengths of the long and short axes, in pixels. 9. The relative body position of the center of the lesion. The z-coordinates were predicted by the self-supervised body part regressor; see our paper for details. The coordinates are approximate and just for reference. 10. The type of the lesion. Types 1~8 correspond to bone, abdomen, mediastinum, liver, lung, kidney, soft tissue, and pelvis, respectively; see our paper for details. The lesion types are coarsely defined and just for reference. Only the lesions in the val and test sets were annotated; others are denoted as -1. 11. This field is set to 1 if the annotation of this lesion is possibly noisy according to manual checking. We have found 35 noisy annotations out of 32,735 so far. 12. Slice range. Context slices neighboring the key slice are provided in this dataset. For example, for the first lesion, the key slice is 109 and the slice range is 103~115, meaning that slices 103~115 are provided. For most lesions, we provide 30mm of extra slices above and below the key slice, unless the long axis of the lesion is larger than this thickness (then we provide more) or the beginning or end of the volume is reached. 13. Spacing (mm per pixel) of the x, y, and z axes. The 3rd value is the slice interval, i.e. the physical distance between two slices. 14. Image size. 15. The windowing (min~max) in Hounsfield units extracted from the original DICOM file. 16. Patient gender: F for female and M for male. 17. Patient age.</description>
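The 16-bit intensity convention above can be sketched as follows, assuming the slice has already been read into a NumPy array (for example with imageio or Pillow); the function name is illustrative:

```python
import numpy as np

def png_to_hu(pixels: np.ndarray) -> np.ndarray:
    """Recover Hounsfield units from a DeepLesion unsigned 16-bit slice.

    Per the dataset notes, HU = stored intensity - 32768. Cast to a
    signed type first so the subtraction cannot wrap around.
    """
    return pixels.astype(np.int32) - 32768

# Air (about -1000 HU) is stored as 31768, water (0 HU) as 32768.
stored = np.array([31768, 32768, 33768], dtype=np.uint16)
print(png_to_hu(stored).tolist())  # -> [-1000, 0, 1000]
```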
<size>243037288033</size>
</item><item>
<title>openem_example_data</title>
<category>Dataset</category>
<infohash>b2a418e07b033bbb37ff46d030d9633d365c148e</infohash>
<guid>https://academictorrents.com/details/b2a418e07b033bbb37ff46d030d9633d365c148e</guid>
<link>https://academictorrents.com/details/b2a418e07b033bbb37ff46d030d9633d365c148e</link>
<description>Dataset to run OpenEM examples and tutorial.</description>
<size>102132625822</size>
</item><item>
<title>M-PACT: Michigan Platform for Activity Classification in Tensorflow</title>
<category>Dataset</category>
<infohash>dcea7fa53925b31215bd8437d2f0805253d6b00f</infohash>
<guid>https://academictorrents.com/details/dcea7fa53925b31215bd8437d2f0805253d6b00f</guid>
<link>https://academictorrents.com/details/dcea7fa53925b31215bd8437d2f0805253d6b00f</link>
<description>There are many hurdles that prevent the replication of existing work and hinder the development of new activity classification models. These hurdles include switching between multiple deep learning libraries and the development of boilerplate experimental pipelines. We present M-PACT to overcome these issues by removing the need to develop boilerplate code, allowing users to quickly prototype action classification models while leveraging existing state-of-the-art (SOTA) models available in the platform. M-PACT is the first to offer four SOTA activity classification models, I3D, C3D, ResNet50+LSTM, and TSN, under a single platform with reproducible competitive results. This platform allows for the generation of models and results over activity recognition datasets through the use of modular code, various preprocessing and neural network layers, and seamless data flow. In this paper, we present the system architecture, detail the functions of various modules, and describe the basic tools to develop a new model in M-PACT.</description>
<size>924280471</size>
</item><item>
<title>02CIX 2018-2019 Sistemi Informativi Aziendali (ITA)</title>
<category>Course</category>
<infohash>a5f1f19c2f6872cbc83c460e372f0ae7e15bf770</infohash>
<guid>https://academictorrents.com/details/a5f1f19c2f6872cbc83c460e372f0ae7e15bf770</guid>
<link>https://academictorrents.com/details/a5f1f19c2f6872cbc83c460e372f0ae7e15bf770</link>
<description>Video lectures of the course Sistemi Informativi Aziendali (Business Information Systems).</description>
<size>9894967027</size>
</item><item>
<title>Condensing Steam: Distilling the Diversity of Gamer Behavior</title>
<category>Dataset</category>
<infohash>eba3b48fcdaa9e69a927051f1678251a86a546f3</infohash>
<guid>https://academictorrents.com/details/eba3b48fcdaa9e69a927051f1678251a86a546f3</guid>
<link>https://academictorrents.com/details/eba3b48fcdaa9e69a927051f1678251a86a546f3</link>
<description>109 MILLION GAMERS 716 MILLION GAMES 1.1 MILLION YEARS OF PLAYTIME A dataset collected and analyzed for the 2016 ACM Internet Measurement Conference article by Mark O'Neill, Justin Wu, Elham Vaziripour, and Daniel Zappala. Table and attribute descriptions</description>
<size>18302328983</size>
</item><item>
<title>PaperDoll Raw Dataset</title>
<category>Dataset</category>
<infohash>e829e7857cc6fae44428122fe5d16ad22963b827</infohash>
<guid>https://academictorrents.com/details/e829e7857cc6fae44428122fe5d16ad22963b827</guid>
<link>https://academictorrents.com/details/e829e7857cc6fae44428122fe5d16ad22963b827</link>
<description>Web-crawled data from Chictopia in Fall 2012. The use of this dataset is subject to copyright law. We release this dataset under [Japanese legislation in effect from Jan 2019](http://eare.eu/japan-amends-tdm-exception-copyright/), but the actual law enforcement is subject to the rule of each country.</description>
<size>43424563065</size>
</item><item>
<title>regnet.pkl</title>
<category>Dataset</category>
<infohash>e109e087a8fc8aec45bae3a74a193922ce27fc58</infohash>
<guid>https://academictorrents.com/details/e109e087a8fc8aec45bae3a74a193922ce27fc58</guid>
<link>https://academictorrents.com/details/e109e087a8fc8aec45bae3a74a193922ce27fc58</link>
<description>In this work, we build a knowledge-based database, named "RegNetwork", of gene regulatory networks for human and mouse by collecting and integrating the documented regulatory interactions among transcription factors (TFs), microRNAs (miRNAs) and target genes from 25 selected databases. Moreover, we also inferred and incorporated potential regulatory relationships based on transcription factor binding site (TFBS) motifs into RegNetwork. As a result, RegNetwork contains a comprehensive set of experimentally observed or predicted transcriptional and post-transcriptional regulatory relationships, and the database framework is flexibly designed for potential extensions to include gene regulatory networks for other organisms in the future.</description>
<size>8809906</size>
</item><item>
<title>genemania.pkl</title>
<category>Dataset</category>
<infohash>5adbacb0b7ea663ac4a7758d39250a1bd28c5b40</infohash>
<guid>https://academictorrents.com/details/5adbacb0b7ea663ac4a7758d39250a1bd28c5b40</guid>
<link>https://academictorrents.com/details/5adbacb0b7ea663ac4a7758d39250a1bd28c5b40</link>
<description>A pickled networkx file containing 16,300 human genes and their associations in an undirected graph with 264,657 edges. GeneMania (Warde-Farley et al., 2010) is a combination of previously published protein-protein interaction and co-expression graphs.</description>
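Since the file is a pickled networkx graph, it can be loaded with the standard pickle module. A minimal sketch, using a toy graph in place of the real file (with genemania.pkl itself, pickle.load should report about 16,300 nodes and 264,657 edges; the gene names here are illustrative):

```python
import pickle

import networkx as nx

# Build a toy undirected gene-association graph and round-trip it
# through pickle, the same way genemania.pkl is stored and loaded.
g = nx.Graph()
g.add_edges_from([("BRCA1", "TP53"), ("TP53", "MDM2")])

loaded = pickle.loads(pickle.dumps(g))
print(loaded.number_of_nodes(), loaded.number_of_edges())  # -> 3 2
```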
<size>9614641</size>
</item><item>
<title>Phishing corpus</title>
<category>Dataset</category>
<infohash>a77cda9a9d89a60dbdfbe581adf6e2df9197995a</infohash>
<guid>https://academictorrents.com/details/a77cda9a9d89a60dbdfbe581adf6e2df9197995a</guid>
<link>https://academictorrents.com/details/a77cda9a9d89a60dbdfbe581adf6e2df9197995a</link>
<description>Downloaded at http://monkey.org/~jose/wiki/doku.php?id=phishingcorpus (2015-02-01)</description>
<size>37482335</size>
</item><item>
<title>crossref-pre-1923-scholarly-works</title>
<category>Dataset</category>
<infohash>70ecab072b2792c9239ab8197d3f52cc1d075be1</infohash>
<guid>https://academictorrents.com/details/70ecab072b2792c9239ab8197d3f52cc1d075be1</guid>
<link>https://academictorrents.com/details/70ecab072b2792c9239ab8197d3f52cc1d075be1</link>
<description>PDF files from various sources, often with added OCR. They were all published before 1923 in international journals. I'm not providing legal advice, but if one considers them simultaneously published in the USA, they should all be in the public domain in the USA. Yet publishers apply indiscriminate copyright statements to the contrary, which may constitute copyfraud, and lock nearly all of them behind paywalls or other hurdles, hoping to milk some more profit for who knows how many centuries. You can also download PDFs by individual publisher, going by their DOI prefix and checking the full list of DOIs.</description>
<size>281999835136</size>
</item><item>
<title>Downsampled Open Images V4 Dataset</title>
<category>Dataset</category>
<infohash>9208d33aceb2ca3eb2beb70a192600c9c41efba1</infohash>
<guid>https://academictorrents.com/details/9208d33aceb2ca3eb2beb70a192600c9c41efba1</guid>
<link>https://academictorrents.com/details/9208d33aceb2ca3eb2beb70a192600c9c41efba1</link>
<description>This is the downsampled version of the Open Images V4 dataset. The Open Images V4 dataset contains 15.4M bounding boxes for 600 categories on 1.9M images and 30.1M human-verified image-level labels for 19,794 categories. The total size of the full dataset is 18 TB. There's also a smaller version with images rescaled to have at most 1024 pixels on the longest side; however, the total size of the rescaled dataset is still large (513 GB for training, 12 GB for validation and 36 GB for testing). I provide a much smaller version of the Open Images Dataset V4, inspired by the Downsampled ImageNet datasets by @PatrykChrabaszcz. These downsampled datasets are much smaller, so everyone can download them with ease (59 GB for training with the 512px version and 16 GB for training with the 256px version). Experiments on these downsampled datasets are also much faster than on the original.

| Dataset  | Train Size | Validation Size | Test Size | Test Challenge Size |
|----------|------------|-----------------|-----------|---------------------|
| Original | 513 GB     | 12 GB           | 36 GB     | 9.7 GB              |
| 512px    | 52.8 GB    | 1.23 GB         | 3.72 GB   | 3.08 GB             |
| 256px    | 16 GB      | 0.4 GB          | 1.14 GB   | 0.95 GB             |</description>
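The "at most N pixels on the longest side" rescaling described above can be sketched with Pillow (the function name and parameters are illustrative assumptions, not the script used to build this torrent):

```python
from PIL import Image

def downsample(im: Image.Image, longest: int = 512) -> Image.Image:
    """Rescale so the longest side is at most `longest` px, keeping aspect ratio."""
    w, h = im.size
    scale = longest / max(w, h)
    if scale >= 1:  # already small enough; return unchanged
        return im
    return im.resize((round(w * scale), round(h * scale)), Image.LANCZOS)

print(downsample(Image.new("RGB", (2048, 1024)), 512).size)  # -> (512, 256)
```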
<size>85220313799</size>
</item><item>
<title>Labeled Optical Coherence Tomography (OCT)</title>
<category>Dataset</category>
<infohash>198145c88af9a1d61ba8070f5b05c3539896ff4e</infohash>
<guid>https://academictorrents.com/details/198145c88af9a1d61ba8070f5b05c3539896ff4e</guid>
<link>https://academictorrents.com/details/198145c88af9a1d61ba8070f5b05c3539896ff4e</link>
<description>Dataset of validated OCT images described and analyzed in "Deep learning-based classification and referral of treatable human diseases". The OCT images are split into a training set and a testing set of independent patients. OCT images are labeled as (disease)-(randomized patient ID)-(image number by this patient) and split into 4 directories: CNV, DME, DRUSEN, and NORMAL.
250 files in directory ./test/CNV
250 files in directory ./test/DME
250 files in directory ./test/DRUSEN
250 files in directory ./test/NORMAL
37205 files in directory ./train/CNV
11348 files in directory ./train/DME
8616 files in directory ./train/DRUSEN
26315 files in directory ./train/NORMAL
https://i.imgur.com/tsAGf0V.png ## Acknowledgements Data: https://data.mendeley.com/datasets/rscbjbr9sj/2 License: CC BY 4.0 ## Citation: Kermany D, Goldbaum M, Cai W et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell. 2018; 172(5):1122-1131. doi:10.1016/j.cell.2018.02.010. http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5</description>
<size>5793183169</size>
</item><item>
<title>Chest X-Ray Images (Pediatric Pneumonia)</title>
<category>Dataset</category>
<infohash>7208a86910cc518ae8feaa9021bf7f8565b97644</infohash>
<guid>https://academictorrents.com/details/7208a86910cc518ae8feaa9021bf7f8565b97644</guid>
<link>https://academictorrents.com/details/7208a86910cc518ae8feaa9021bf7f8565b97644</link>
<description>The dataset is organized into 3 folders (train, test, val) and contains subfolders for each image category (Pneumonia/Normal). There are 5,863 X-Ray images (JPEG) and 2 categories (Pneumonia/Normal). Chest X-ray images (anterior-posterior) were selected from retrospective cohorts of pediatric patients aged one to five years from Guangzhou Women and Children’s Medical Center, Guangzhou. All chest X-ray imaging was performed as part of patients’ routine clinical care. For the analysis of chest x-ray images, all chest radiographs were initially screened for quality control by removing all low-quality or unreadable scans. The diagnoses for the images were then graded by two expert physicians before being cleared for training the AI system. In order to account for any grading errors, the evaluation set was also checked by a third expert. https://i.imgur.com/U7dBW7X.png ## Acknowledgements Data: https://data.mendeley.com/datasets/rscbjbr9sj/2 License: CC BY 4.0 ## Citation: http://www.cell.com/cell/fulltext/S0092-8674(18)30154-5</description>
<size>1236184657</size>
</item><item>
<title>comma2k19</title>
<category>Dataset</category>
<infohash>65a2fbc964078aff62076ff4e103f18b951c5ddb</infohash>
<guid>https://academictorrents.com/details/65a2fbc964078aff62076ff4e103f18b951c5ddb</guid>
<link>https://academictorrents.com/details/65a2fbc964078aff62076ff4e103f18b951c5ddb</link>
<description>comma.ai presents comma2k19, a dataset of over 33 hours of commuting on California's Highway 280. It comprises 2019 segments, each 1 minute long, on a 20 km section of highway driving between San Jose and San Francisco, California. The dataset was collected using comma EONs, which have sensors similar to those of any modern smartphone, including a road-facing camera, phone GPS, thermometers and a 9-axis IMU. Additionally, the EON captures raw GNSS measurements and all CAN data sent by the car with a comma grey panda. Laika, an open-source GNSS processing library, is also introduced here. Laika produces 40% more accurate positions than the GNSS module used to collect the raw data. This dataset includes pose (position + orientation) estimates in a global reference frame of the recording camera. These poses were computed with a tightly coupled INS/GNSS/Vision optimizer that relies on data processed by Laika. comma2k19 is ideal for the development and validation of tightly coupled GNSS algorithms and mapping algorithms that work with commodity sensors.</description>
<size>94622767664</size>
</item><item>
<title>FlyingThings3D_subset_motion_boundary_weights.tar.bz2</title>
<category>Dataset</category>
<infohash>c7879dcb761cb49fda13f77b1be5330698a009e1</infohash>
<guid>https://academictorrents.com/details/c7879dcb761cb49fda13f77b1be5330698a009e1</guid>
<link>https://academictorrents.com/details/c7879dcb761cb49fda13f77b1be5330698a009e1</link>
<description>This torrent contains the "Motion Boundary Weights" data for the DispNet/FlowNet2.0 subset of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>12676259145</size>
</item><item>
<title>FlyingThings3D_subset_image_clean.tar.bz2</title>
<category>Dataset</category>
<infohash>d08b2b4fa4fe322038f22d1a5fbcaf9215909701</infohash>
<guid>https://academictorrents.com/details/d08b2b4fa4fe322038f22d1a5fbcaf9215909701</guid>
<link>https://academictorrents.com/details/d08b2b4fa4fe322038f22d1a5fbcaf9215909701</link>
<description>This torrent contains the "Clean pass" images (WebP format) for the DispNet/FlowNet2.0 subset of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>37519054864</size>
</item><item>
<title>FlyingThings3D_subset_flow_occlusion_weights.tar.bz2</title>
<category>Dataset</category>
<infohash>fa3dd093d0822a9ba767b19fd29625fb0a7d05ee</infohash>
<guid>https://academictorrents.com/details/fa3dd093d0822a9ba767b19fd29625fb0a7d05ee</guid>
<link>https://academictorrents.com/details/fa3dd093d0822a9ba767b19fd29625fb0a7d05ee</link>
<description>This torrent contains the "Flow Occlusion Weights" data for the DispNet/FlowNet2.0 subset of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>16082425037</size>
</item><item>
<title>FlyingThings3D_subset_flow.tar.bz2</title>
<category>Dataset</category>
<infohash>3cc7a1e0066302857ba8aa8e39e509e6332570b2</infohash>
<guid>https://academictorrents.com/details/3cc7a1e0066302857ba8aa8e39e509e6332570b2</guid>
<link>https://academictorrents.com/details/3cc7a1e0066302857ba8aa8e39e509e6332570b2</link>
<description>This torrent contains the "Optical Flow" data for the DispNet/FlowNet2.0 subset of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>80107266123</size>
</item><item>
<title>FlyingThings3D_subset_disparity.tar.bz2</title>
<category>Dataset</category>
<infohash>4ef721c60242c3a509383335bac038754473adff</infohash>
<guid>https://academictorrents.com/details/4ef721c60242c3a509383335bac038754473adff</guid>
<link>https://academictorrents.com/details/4ef721c60242c3a509383335bac038754473adff</link>
<description>This torrent contains the "Disparity" data for the DispNet/FlowNet2.0 subset of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>5364443175</size>
</item><item>
<title>FlyingThings3D_subset_disparity_occlusion_weights.tar.bz2</title>
<category>Dataset</category>
<infohash>577500cf640dcd86ec6da416670ac29714cc37fc</infohash>
<guid>https://academictorrents.com/details/577500cf640dcd86ec6da416670ac29714cc37fc</guid>
<link>https://academictorrents.com/details/577500cf640dcd86ec6da416670ac29714cc37fc</link>
<description>This torrent contains the "Disparity Occlusion Weights" data for the DispNet/FlowNet2.0 subset of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>9623054091</size>
</item><item>
<title>FlyingThings3D_subset_disparity_change.tar.bz2</title>
<category>Dataset</category>
<infohash>b8a921d40b00ae6c572a3942cbbbab18540a8b94</infohash>
<guid>https://academictorrents.com/details/b8a921d40b00ae6c572a3942cbbbab18540a8b94</guid>
<link>https://academictorrents.com/details/b8a921d40b00ae6c572a3942cbbbab18540a8b94</link>
<description>This torrent contains the "Disparity Change" data for the DispNet/FlowNet2.0 subset of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>2486120185</size>
</item><item>
<title>FlyingThings3D_subset_depth_boundary_weights.tar.bz2</title>
<category>Dataset</category>
<infohash>0b351cf2ba27a862610f6b4ecdf2b398b543556e</infohash>
<guid>https://academictorrents.com/details/0b351cf2ba27a862610f6b4ecdf2b398b543556e</guid>
<link>https://academictorrents.com/details/0b351cf2ba27a862610f6b4ecdf2b398b543556e</link>
<description>This torrent contains the "Depth Boundary Weights" data for the DispNet/FlowNet2.0 subset of the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>10741354970</size>
</item><item>
<title>30M Factoid Question-Answer Corpus (30MQA)</title>
<category>Dataset</category>
<infohash>973fb709bdb9db6066213bbc5529482a190098ce</infohash>
<guid>https://academictorrents.com/details/973fb709bdb9db6066213bbc5529482a190098ce</guid>
<link>https://academictorrents.com/details/973fb709bdb9db6066213bbc5529482a190098ce</link>
<description>The 30M Factoid Question-Answer Corpus consists of 30M natural language questions in English and their corresponding facts in the knowledge base Freebase. The dataset is formatted as a text file, where each line contains:     &lt;subject&gt; \t &lt;relationship&gt; \t &lt;object&gt; \t natural language question,     where &lt;subject&gt;, &lt;relationship&gt; and &lt;object&gt; are  the subject, relationship and object identifier in Freebase corresponding to the natural language question. For a more detailed description, have a look at our paper: Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus http://arxiv.org/abs/1603.06807 Sample:     &lt;http://rdf.freebase.com/ns/m.04whkz5&gt;	www.freebase.com/book/written_work/subjects	&lt;http://rdf.freebase.com/ns/m.01cj3p&gt;	what is the book e about ? &lt;http://rdf.freebase.com/ns/m.0tp2p24&gt;	www.freebase.com/music/release_track/release	&lt;http://rdf.freebase.com/ns/m.0sjc7c1&gt;	in what release does the release track cardiac arrest come from ? &lt;http://rdf.freebase.com/ns/m.04j0t75&gt;	www.freebase.com/film/film/country	&lt;http://rdf.freebase.com/ns/m.07ssc&gt;	what country is the debt from ? &lt;http://rdf.freebase.com/ns/m.0ftqr&gt;	www.freebase.com/music/producer/tracks_produced	&lt;http://rdf.freebase.com/ns/m.0p600l&gt;	what songs have nobuo uematsu produced ? &lt;http://rdf.freebase.com/ns/m.036p007&gt;	www.freebase.com/music/release/producers	&lt;http://rdf.freebase.com/ns/m.0677ng&gt;	who produced eve-olution ? &lt;http://rdf.freebase.com/ns/m.0ms5mg&gt;	www.freebase.com/music/recording/artist	&lt;http://rdf.freebase.com/ns/m.0mjn2&gt;	which artist recorded most of us are sad ?</description>
<size>529342167</size>
</item><item>
<title>flyingthings3d__optical_flow.tar.bz2</title>
<category>Dataset</category>
<infohash>93a54256fe2f56dea2c7d247af11d9affa06a06d</infohash>
<guid>https://academictorrents.com/details/93a54256fe2f56dea2c7d247af11d9affa06a06d</guid>
<link>https://academictorrents.com/details/93a54256fe2f56dea2c7d247af11d9affa06a06d</link>
<description>This torrent contains the "Optical Flow" data for the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>332942778936</size>
</item><item>
<title>flyingthings3d__disparity_change.tar.bz2</title>
<category>Dataset</category>
<infohash>3acbcf869ce364fcf25704daccc72daacecc8e62</infohash>
<guid>https://academictorrents.com/details/3acbcf869ce364fcf25704daccc72daacecc8e62</guid>
<link>https://academictorrents.com/details/3acbcf869ce364fcf25704daccc72daacecc8e62</link>
<description>This torrent contains the "Disparity Change" data for the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>124265175440</size>
</item><item>
<title>flyingthings3d__disparity.tar.bz2</title>
<category>Dataset</category>
<infohash>3221ff49a08f5e6749f24958c1f76248fe9cb884</infohash>
<guid>https://academictorrents.com/details/3221ff49a08f5e6749f24958c1f76248fe9cb884</guid>
<link>https://academictorrents.com/details/3221ff49a08f5e6749f24958c1f76248fe9cb884</link>
<description>This torrent contains the "Disparity" data for the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>93213362434</size>
</item><item>
<title>flyingthings3d__frames_finalpass_webp.tar</title>
<category>Dataset</category>
<infohash>ebecc3be97d60df0691ef4b4103eaf6e2c344a7d</infohash>
<guid>https://academictorrents.com/details/ebecc3be97d60df0691ef4b4103eaf6e2c344a7d</guid>
<link>https://academictorrents.com/details/ebecc3be97d60df0691ef4b4103eaf6e2c344a7d</link>
<description>This torrent contains the "Final pass" images (WebP format) for the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>6099261440</size>
</item><item>
<title>flyingthings3d__frames_finalpass.tar</title>
<category>Dataset</category>
<infohash>48e5e770aa8469c0826ae322209cdc0ac115a385</infohash>
<guid>https://academictorrents.com/details/48e5e770aa8469c0826ae322209cdc0ac115a385</guid>
<link>https://academictorrents.com/details/48e5e770aa8469c0826ae322209cdc0ac115a385</link>
<description>This torrent contains the "Final pass" images (PNG format) for the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>45212712960</size>
</item><item>
<title>flyingthings3d__frames_cleanpass_webp.tar</title>
<category>Dataset</category>
<infohash>d20b0f88033b652b84ef4fa49ebcaa7f692df1a5</infohash>
<guid>https://academictorrents.com/details/d20b0f88033b652b84ef4fa49ebcaa7f692df1a5</guid>
<link>https://academictorrents.com/details/d20b0f88033b652b84ef4fa49ebcaa7f692df1a5</link>
<description>This torrent contains the "Clean pass" images (WebP format) for the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>7839150080</size>
</item><item>
<title>flyingthings3d__frames_cleanpass.tar</title>
<category>Dataset</category>
<infohash>20afbe18b5d1754b75deeefe4c2c74b17c9ea792</infohash>
<guid>https://academictorrents.com/details/20afbe18b5d1754b75deeefe4c2c74b17c9ea792</guid>
<link>https://academictorrents.com/details/20afbe18b5d1754b75deeefe4c2c74b17c9ea792</link>
<description>This torrent contains the "Clean pass" images (PNG format) for the "FlyingThings3D" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>39473469440</size>
</item><item>
<title>driving__optical_flow.tar.bz2</title>
<category>Dataset</category>
<infohash>f0eea7805f4174265b60ccc26be05eee979d0896</infohash>
<guid>https://academictorrents.com/details/f0eea7805f4174265b60ccc26be05eee979d0896</guid>
<link>https://academictorrents.com/details/f0eea7805f4174265b60ccc26be05eee979d0896</link>
<description>This torrent contains the "Optical Flow" data for the "Driving" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>53078783411</size>
</item><item>
<title>driving__disparity_change.tar.bz2</title>
<category>Dataset</category>
<infohash>916e1b259a7dd818e11a643b551bbbe3125fb126</infohash>
<guid>https://academictorrents.com/details/916e1b259a7dd818e11a643b551bbbe3125fb126</guid>
<link>https://academictorrents.com/details/916e1b259a7dd818e11a643b551bbbe3125fb126</link>
<description>This torrent contains the "Disparity Change" data for the "Driving" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>23317626645</size>
</item><item>
<title>driving__disparity.tar.bz2</title>
<category>Dataset</category>
<infohash>1d642a371312d193ae4523e089bf127917294175</infohash>
<guid>https://academictorrents.com/details/1d642a371312d193ae4523e089bf127917294175</guid>
<link>https://academictorrents.com/details/1d642a371312d193ae4523e089bf127917294175</link>
<description>This torrent contains the "Disparity" data for the "Driving" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>9561161211</size>
</item><item>
<title>driving__frames_finalpass_webp.tar</title>
<category>Dataset</category>
<infohash>54e6d47f88478597405aaf53d0190ec3bf051b0e</infohash>
<guid>https://academictorrents.com/details/54e6d47f88478597405aaf53d0190ec3bf051b0e</guid>
<link>https://academictorrents.com/details/54e6d47f88478597405aaf53d0190ec3bf051b0e</link>
<description>This torrent contains the "Final pass" images (WebP format) for the "Driving" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>970711040</size>
</item><item>
<title>driving__frames_finalpass.tar</title>
<category>Dataset</category>
<infohash>fcad06599cf5de36db3c250f6d90d4f9e7db3421</infohash>
<guid>https://academictorrents.com/details/fcad06599cf5de36db3c250f6d90d4f9e7db3421</guid>
<link>https://academictorrents.com/details/fcad06599cf5de36db3c250f6d90d4f9e7db3421</link>
<description>This torrent contains the "Final pass" images (PNG format) for the "Driving" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>6548582400</size>
</item><item>
<title>driving__frames_cleanpass_webp.tar</title>
<category>Dataset</category>
<infohash>a52319069b8e8d4d8b4ada6251f031ed8cf7cecc</infohash>
<guid>https://academictorrents.com/details/a52319069b8e8d4d8b4ada6251f031ed8cf7cecc</guid>
<link>https://academictorrents.com/details/a52319069b8e8d4d8b4ada6251f031ed8cf7cecc</link>
<description>This torrent contains the "Clean pass" images (WebP format) for the "Driving" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>1548083200</size>
</item><item>
<title>driving__frames_cleanpass.tar</title>
<category>Dataset</category>
<infohash>ea392433e3dfcb4b83dcd3300dfa9b9919ef8e1f</infohash>
<guid>https://academictorrents.com/details/ea392433e3dfcb4b83dcd3300dfa9b9919ef8e1f</guid>
<link>https://academictorrents.com/details/ea392433e3dfcb4b83dcd3300dfa9b9919ef8e1f</link>
<description>This torrent contains the "Clean pass" images (PNG format) for the "Driving" dataset from the CVPR 2016 paper "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation" by Mayer et al. (https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html).</description>
<size>6681896960</size>
</item><item>
<title>Indiana University - Chest X-Rays (PNG Images)</title>
<category>Dataset</category>
<infohash>5a3a439df24931f410fac269b87b050203d9467d</infohash>
<guid>https://academictorrents.com/details/5a3a439df24931f410fac269b87b050203d9467d</guid>
<link>https://academictorrents.com/details/5a3a439df24931f410fac269b87b050203d9467d</link>
<description>1000 radiology reports for the chest x-ray images from the Indiana University hospital network. To identify the images associated with a report, use the corresponding XML tag (more than one image may be associated with a report). https://i.imgur.com/5uR5snH.png</description>
<size>1360814128</size>
</item><item>
<title>Indiana University - Chest X-Rays (XML Reports)</title>
<category>Dataset</category>
<infohash>66450ba52ba3f83fbf82ef9c91f2bde0e845aba9</infohash>
<guid>https://academictorrents.com/details/66450ba52ba3f83fbf82ef9c91f2bde0e845aba9</guid>
<link>https://academictorrents.com/details/66450ba52ba3f83fbf82ef9c91f2bde0e845aba9</link>
<description>1000 radiology reports for the chest x-ray images from the Indiana University hospital network.</description>
<size>1112632</size>
</item><item>
<title>Tom Mitchell - Machine Learning  - 2012</title>
<category>Course</category>
<infohash>35b6b8bf0c2931ba7ecd8a1a8e65fa32f3e7473f</infohash>
<guid>https://academictorrents.com/details/35b6b8bf0c2931ba7ecd8a1a8e65fa32f3e7473f</guid>
<link>https://academictorrents.com/details/35b6b8bf0c2931ba7ecd8a1a8e65fa32f3e7473f</link>
<description>http://www.cs.cmu.edu/~tom/10601_fall2012/lectures.shtml</description>
<size>5866402365</size>
</item><item>
<title>Zinc Molecule Dataset from Constrained Graph Variational Autoencoders for Molecule Design</title>
<category>Dataset</category>
<infohash>4776b264ca3c4ed05530124b6319ce0d45aff626</infohash>
<guid>https://academictorrents.com/details/4776b264ca3c4ed05530124b6319ce0d45aff626</guid>
<link>https://academictorrents.com/details/4776b264ca3c4ed05530124b6319ce0d45aff626</link>
<description>ZINC is a free database of commercially-available compounds for virtual screening. ZINC contains over 230 million purchasable compounds in ready-to-dock, 3D formats. ZINC also contains over 750 million purchasable compounds you can search for analogs in under a minute. Sterling and Irwin, J. Chem. Inf. Model, 2015 http://pubs.acs.org/doi/abs/10.1021/acs.jcim.5b00559 This particular dataset comes from the paper Constrained Graph Variational Autoencoders for Molecule Design, Qi Liu, Miltiadis Allamanis, Marc Brockschmidt, Alexander L. Gaunt.</description>
<size>33939265</size>
</item><item>
<title>The PatchCamelyon benchmark dataset (PCAM)</title>
<category>Dataset</category>
<infohash>1561a180b11d4b746273b5ce46772ad36f1229b6</infohash>
<guid>https://academictorrents.com/details/1561a180b11d4b746273b5ce46772ad36f1229b6</guid>
<link>https://academictorrents.com/details/1561a180b11d4b746273b5ce46772ad36f1229b6</link>
<description>The PatchCamelyon benchmark is a new and challenging image classification dataset. It consists of 327,680 color images (96 x 96 px) extracted from histopathologic scans of lymph node sections. Each image is annotated with a binary label indicating the presence of metastatic tissue. PCam provides a new benchmark for machine learning models: bigger than CIFAR10, smaller than ImageNet, trainable on a single GPU. ## Why PCam Fundamental machine learning advancements are predominantly evaluated on straightforward natural-image classification datasets: think MNIST, CIFAR, SVHN. Medical imaging is becoming one of the major applications of ML, and we believe it deserves a spot on the list of go-to ML datasets, both to challenge future work and to steer developments in directions that are beneficial for this domain. We think PCam can play a role in this. It packs the clinically relevant task of metastasis detection into a straightforward binary image classification task, akin to CIFAR-10 and MNIST. Models can easily be trained on a single GPU in a couple of hours, and achieve competitive scores on the Camelyon16 tasks of tumor detection and WSI diagnosis. Furthermore, the balance between task difficulty and tractability makes it a prime candidate for fundamental machine learning research on topics such as active learning, model uncertainty, and explainability. https://github.com/basveeling/pcam/raw/master/pcam.jpg</description>
<size>8061211742</size>
</item><item>
<title>University of Washington - Pedro Domingos - Machine Learning</title>
<category>Course</category>
<infohash>0db676a6aaff8c33f9749d5f9c0fa22bf336bc76</infohash>
<guid>https://academictorrents.com/details/0db676a6aaff8c33f9749d5f9c0fa22bf336bc76</guid>
<link>https://academictorrents.com/details/0db676a6aaff8c33f9749d5f9c0fa22bf336bc76</link>
<description>Video lectures from the course Data Mining &amp; Machine Learning by Prof. Pedro Domingos, University of Washington, USA.</description>
<size>9066763048</size>
</item><item>
<title>POLEN23E: image dataset for the Brazilian Savannah pollen types</title>
<category>Dataset</category>
<infohash>ee51ec7708b35b023caba4230c871ae1fa254ab3</infohash>
<guid>https://academictorrents.com/details/ee51ec7708b35b023caba4230c871ae1fa254ab3</guid>
<link>https://academictorrents.com/details/ee51ec7708b35b023caba4230c871ae1fa254ab3</link>
<description>The classification of pollen species and types is an important task in many areas, such as forensic palynology, archaeological palynology, and melissopalynology. This paper presents the first annotated image dataset for the Brazilian Savannah pollen types that can be used to train and test computer-vision-based automatic pollen classifiers. A first baseline of human and computer performance for this dataset has been established using 805 pollen images of 23 pollen types. To assess the computer performance, a combination of three feature extractors and four machine learning techniques has been implemented, fine-tuned, and tested. https://i.imgur.com/P2bNuVi.png Citation: Gonçalves AB, Souza JS, Silva GGd, Cereda MP, Pott A, Naka MH, et al. (2016) Feature Extraction and Machine Learning for the Classification of Brazilian Savannah Pollen Grains. PLoS ONE 11(6): e0157044. https://doi.org/10.1371/journal.pone.0157044 The link for the dataset is: http://dx.doi.org/10.6084/m9.figshare.1525086.</description>
<size>34561190</size>
</item><item>
<title>BRATS2013 Tumor-NoTumor Dataset (T-NT)</title>
<category>Dataset</category>
<infohash>d52ccc21455c7a82fd6e58964c89b7da99e0edf7</infohash>
<guid>https://academictorrents.com/details/d52ccc21455c7a82fd6e58964c89b7da99e0edf7</guid>
<link>https://academictorrents.com/details/d52ccc21455c7a82fd6e58964c89b7da99e0edf7</link>
<description>This dataset (called T-NT) contains images that do or do not contain a tumor, along with a segmentation of brain matter and of the tumor. The goal is for it to be usable to simulate bias in data in a controlled fashion. # Dataset Construction The synthetic data of the BRATS2013 dataset is used to construct this dataset. Each brain contains a tumor, but it is typically only on one side. Only the right side is taken in order to have examples that do not have tumors. Each image is filtered to ensure it contains enough brain (more than 30% of the pixels). If the tumor takes up at least 1% of the pixels in the brain, then the image is considered to have a tumor. Here is a snippet from the code used to construct the dataset:     def get_labels(rightside):</description>
<size>65630646</size>
</item><item>
<title>ImageClef - IAPR TC-12 Benchmark</title>
<category>Dataset</category>
<infohash>cf870b196222cf961a01c13999be9e4b7760cef1</infohash>
<guid>https://academictorrents.com/details/cf870b196222cf961a01c13999be9e4b7760cef1</guid>
<link>https://academictorrents.com/details/cf870b196222cf961a01c13999be9e4b7760cef1</link>
<description>The following archive contains the complete IAPR TC-12 Benchmark, which is now available free of charge and without any copyright restrictions. This is the most up-to-date version of the IAPR TC-12 Benchmark and should be used by researchers from now on. This archive comprises: 20,000 images; 1,000 additional images previously used in object annotation tasks and/or the MUSCLE live event; all complete (full-text) annotations (English, German, Random); all light annotations (English, German, Spanish, Random), i.e. all annotation tags except for the description tag. Note: The image collection of the IAPR TC-12 Benchmark consists of 20,000 still natural images taken at locations around the world, comprising an assorted cross-section of still natural images. This includes pictures of different sports and actions, photographs of people, animals, cities, landscapes, and many other aspects of contemporary life. Example images can be found in Section 2. Each image is associated with a text caption in up to three different languages (English, German, and Spanish). These annotations are stored in a database managed by a benchmark administration system that allows the specification of parameters according to which different subsets of the image collection can be generated. Section 3 provides more information and an annotation example. Information on how to access (and download) the complete benchmark as well as the resources used at ImageCLEFphoto 2006 - 2008 is given in Sections 4 and 5, while Section 6 provides links to related publications. 2 Collection Content The 20,000 images are high-quality, multi-object, colour photographs that have been chosen according to strict image selection rules (see [2] for more details).
In publications based on the IAPR TC-12 Benchmark and/or the use of its data or a subset thereof, please cite the following publication: The IAPR Benchmark: A New Evaluation Resource for Visual Information Systems, Grubinger, Michael, Clough, Paul D., Müller, Henning, and Deselaers, Thomas, International Conference on Language Resources and Evaluation, 24/05/2006, Genoa, Italy, (2006). Additional information on this data is available in the PhD thesis of Michael Grubinger: Michael Grubinger. Analysis and Evaluation of Visual Information Systems Performance. PhD Thesis. School of Computer Science and Mathematics, Faculty of Health, Engineering and Science, Victoria University, Melbourne, Australia, 2007. The thesis is available here: http://nla.gov.au/anbd.bib-an43036734 http://wallaby.vu.edu.au/adt-VVUT/public/adt-VVUT20080408.130459/index.html Data can also be downloaded here: http://www-i6.informatik.rwth-aachen.de/imageclef/resources/iaprtc12.tgz ![](https://i.imgur.com/bWnsHUj.png)</description>
<size>1764963259</size>
</item><item>
<title>comma.ai driving dataset</title>
<category>Dataset</category>
<infohash>58c41e8bcc8eb4e2204a3b263cdf728c0a7331eb</infohash>
<guid>https://academictorrents.com/details/58c41e8bcc8eb4e2204a3b263cdf728c0a7331eb</guid>
<link>https://academictorrents.com/details/58c41e8bcc8eb4e2204a3b263cdf728c0a7331eb</link>
<description>This dataset contains more than seven hours of highway driving for you to use in your projects. Details included within the dataset are: - The speed of the car - The acceleration - The steering angle - GPS coordinates ![](https://i.imgur.com/X6LA8Qm.gif) 45 GB compressed, 80 GB uncompressed     dog/2016-01-30--11-24-51 (7.7G) dog/2016-01-30--13-46-00 (8.5G) dog/2016-01-31--19-19-25 (3.0G) dog/2016-02-02--10-16-58 (8.1G) dog/2016-02-08--14-56-28 (3.9G) dog/2016-02-11--21-32-47 (13G) dog/2016-03-29--10-50-20 (12G) emily/2016-04-21--14-48-08 (4.4G) emily/2016-05-12--22-20-00 (7.5G) frodo/2016-06-02--21-39-29 (6.5G) frodo/2016-06-08--11-46-01 (2.7G)     The dataset referenced on this page is copyrighted by comma.ai and published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. This means that you must attribute the work in the manner specified by the authors, you may not use this work for commercial purposes, and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license. ## Dataset structure The dataset consists of 10 video clips of variable size recorded at 20 Hz with a camera mounted on the windshield of an Acura ILX 2016. In parallel to the videos we also recorded measurements such as the car's speed, acceleration, steering angle, GPS coordinates, and gyroscope angles. See the full  log  list [here](Logs.md). These measurements are transformed onto a uniform 100 Hz time base. The dataset folder structure is the following:    +-- dataset |   +-- camera |   |   +-- 2016-04-21--14-48-08 |   |   ...
|   +-- log |   |   +-- 2016-04-21--14-48-08 |   |   ...     All the files come in hdf5 format and are named with the time they were recorded. The camera dataset has shape  number_frames x 3 x 160 x 320  and  uint8  type. One of the  log  hdf5-datasets is called  cam1_ptr  and addresses the alignment between camera frames and the other measurements.</description>
<size>48284827648</size>
</item><item>
<title>The Blackbird Dataset: A large-scale dataset for UAV perception in aggressive flight</title>
<category>Dataset</category>
<infohash>eb542a231dbeb2125e4ec88ddd18841a867c2656</infohash>
<guid>https://academictorrents.com/details/eb542a231dbeb2125e4ec88ddd18841a867c2656</guid>
<link>https://academictorrents.com/details/eb542a231dbeb2125e4ec88ddd18841a867c2656</link>
<description>The Blackbird unmanned aerial vehicle (UAV) dataset is a large-scale indoor dataset collected using a custom-built quadrotor platform for use in evaluation of agile perception. The dataset contains over 10 hours of flight data from 168 flights over 17 flight trajectories and 5 environments at velocities up to 8.0 m/s. Each flight includes sensor data from 120 Hz stereo and downward-facing photorealistic virtual cameras, 100 Hz IMU, 190 Hz motor speed sensors, and 360 Hz millimeter-accurate motion capture ground truth. Camera images for each flight were photorealistically rendered using FlightGoggles across a variety of environments to facilitate experimentation of perception algorithms. The dataset is available at http://blackbird-dataset.mit.edu. # Citation</description>
<size>4789641252217</size>
</item><item>
<title>COCO 2017</title>
<category>Dataset</category>
<infohash>74dec1dd21ae4994dfd9069f9cb0443eb960c962</infohash>
<guid>https://academictorrents.com/details/74dec1dd21ae4994dfd9069f9cb0443eb960c962</guid>
<link>https://academictorrents.com/details/74dec1dd21ae4994dfd9069f9cb0443eb960c962</link>
<description>Probably the most widely used dataset today for object localization is COCO: Common Objects in Context. Provided here are all the files from the 2017 version, along with an additional subset dataset created by fast.ai. Details of each COCO dataset are available from the COCO dataset page. The fast.ai subset contains all images that contain at least one of the selected categories, restricting the annotated objects to just those categories; the categories are: chair, couch, tv, remote, book, and vase.</description>
<size>52440275940</size>
</item><item>
<title>PASCAL Visual Object Classes (VOC)</title>
<category>Dataset</category>
<infohash>e6d591cef9ea2840f7d8dfb6bb0e0503d5592128</infohash>
<guid>https://academictorrents.com/details/e6d591cef9ea2840f7d8dfb6bb0e0503d5592128</guid>
<link>https://academictorrents.com/details/e6d591cef9ea2840f7d8dfb6bb0e0503d5592128</link>
<description>Standardised image data sets for object class recognition - both 2007 and 2012 versions are provided here. The 2012 version has 20 classes. The train/val data has 11,530 images containing 27,450 ROI annotated objects and 6,929 segmentations.</description>
<size>4639722845</size>
</item><item>
<title>Camvid: Motion-based Segmentation and Recognition Dataset</title>
<category>Dataset</category>
<infohash>890e6716827f31cbd096c5aee7f777e30df7094a</infohash>
<guid>https://academictorrents.com/details/890e6716827f31cbd096c5aee7f777e30df7094a</guid>
<link>https://academictorrents.com/details/890e6716827f31cbd096c5aee7f777e30df7094a</link>
<description>Segmentation dataset with per-pixel semantic segmentation of over 700 images, each inspected and confirmed by a second person for accuracy.</description>
<size>602777566</size>
</item><item>
<title>Yelp reviews - Polarity</title>
<category>Dataset</category>
<infohash>271777225ff3c6dec8055e231c70731a1da2518f</infohash>
<guid>https://academictorrents.com/details/271777225ff3c6dec8055e231c70731a1da2518f</guid>
<link>https://academictorrents.com/details/271777225ff3c6dec8055e231c70731a1da2518f</link>
<description>1,569,264 samples from the Yelp Dataset Challenge 2015. This subset has 280,000 training samples and 19,000 test samples in each polarity.</description>
<size>166373201</size>
</item><item>
<title>Yelp reviews - Full</title>
<category>Dataset</category>
<infohash>66ab083bda0c508de6c641baabb1ec17f72dc480</infohash>
<guid>https://academictorrents.com/details/66ab083bda0c508de6c641baabb1ec17f72dc480</guid>
<link>https://academictorrents.com/details/66ab083bda0c508de6c641baabb1ec17f72dc480</link>
<description>1,569,264 samples from the Yelp Dataset Challenge 2015. This full dataset has 130,000 training samples and 10,000 testing samples for each star rating.</description>
<size>196146755</size>
</item><item>
<title>Sogou news</title>
<category>Dataset</category>
<infohash>b2b847b5e1946b0479baa838a0b0547178e5ebe8</infohash>
<guid>https://academictorrents.com/details/b2b847b5e1946b0479baa838a0b0547178e5ebe8</guid>
<link>https://academictorrents.com/details/b2b847b5e1946b0479baa838a0b0547178e5ebe8</link>
<description>2,909,551 news articles from the SogouCA and SogouCS news corpora, in 5 categories. For each class, 90,000 training samples and 12,000 testing samples were selected. Note that the Chinese characters have been converted to Pinyin.</description>
<size>384269937</size>
</item><item>
<title>DBPedia ontology</title>
<category>Dataset</category>
<infohash>881118da3e05d63f91dbadf84317381203f3cb24</infohash>
<guid>https://academictorrents.com/details/881118da3e05d63f91dbadf84317381203f3cb24</guid>
<link>https://academictorrents.com/details/881118da3e05d63f91dbadf84317381203f3cb24</link>
<description>40,000 training samples and 5,000 testing samples per class from 14 non-overlapping classes in DBpedia 2014.</description>
<size>68341743</size>
</item><item>
<title>Amazon reviews - Polarity</title>
<category>Dataset</category>
<infohash>db0cd5603a0d154ec3dcfc6ff7862d47d3884b83</infohash>
<guid>https://academictorrents.com/details/db0cd5603a0d154ec3dcfc6ff7862d47d3884b83</guid>
<link>https://academictorrents.com/details/db0cd5603a0d154ec3dcfc6ff7862d47d3884b83</link>
<description>34,686,770 Amazon reviews from 6,643,669 users on 2,441,053 products, from the Stanford Network Analysis Project (SNAP). This subset contains 1,800,000 training samples and 200,000 testing samples for each sentiment polarity.</description>
<size>688339454</size>
</item><item>
<title>Amazon reviews - Full</title>
<category>Dataset</category>
<infohash>66ddbb6d5f49aa6c36a01ca5e814f1beef00b5b7</infohash>
<guid>https://academictorrents.com/details/66ddbb6d5f49aa6c36a01ca5e814f1beef00b5b7</guid>
<link>https://academictorrents.com/details/66ddbb6d5f49aa6c36a01ca5e814f1beef00b5b7</link>
<description>34,686,770 Amazon reviews from 6,643,669 users on 2,441,053 products, from the Stanford Network Analysis Project (SNAP). This full dataset contains 600,000 training samples and 130,000 testing samples for each class.</description>
<size>643695014</size>
</item><item>
<title>AG News</title>
<category>Dataset</category>
<infohash>758bf646e3ffd39d20f9a3d9efbdb0e1eade5022</infohash>
<guid>https://academictorrents.com/details/758bf646e3ffd39d20f9a3d9efbdb0e1eade5022</guid>
<link>https://academictorrents.com/details/758bf646e3ffd39d20f9a3d9efbdb0e1eade5022</link>
<description>496,835 categorized news articles from &gt;2000 news sources, taken from the 4 largest classes of AG’s corpus of news articles, using only the title and description fields. Each class has 30,000 training samples and 1,900 testing samples.</description>
<size>11784419</size>
</item><item>
<title>WMT 2015 French/English parallel texts</title>
<category>Dataset</category>
<infohash>2bc57fed1ea43b24296e096aa8746f6faee9513e</infohash>
<guid>https://academictorrents.com/details/2bc57fed1ea43b24296e096aa8746f6faee9513e</guid>
<link>https://academictorrents.com/details/2bc57fed1ea43b24296e096aa8746f6faee9513e</link>
<description>French/English parallel texts for training translation models. Over 20 million sentences in French and English. Dataset created by Chris Callison-Burch, who crawled millions of web pages and then used a set of simple heuristics to transform French URLs into English URLs, assuming that the corresponding documents are translations of each other.</description>
<size>2598183296</size>
</item><item>
<title>Wikitext-2</title>
<category>Dataset</category>
<infohash>ac7ffa98b66427246a316a81b2ea31c9b58ea5b6</infohash>
<guid>https://academictorrents.com/details/ac7ffa98b66427246a316a81b2ea31c9b58ea5b6</guid>
<link>https://academictorrents.com/details/ac7ffa98b66427246a316a81b2ea31c9b58ea5b6</link>
<description>A subset of Wikitext-103; useful for testing language model training on smaller datasets.</description>
<size>4070055</size>
</item><item>
<title>Wikitext-103</title>
<category>Dataset</category>
<infohash>a4fee5547056c845e31ab952598f43b42333183c</infohash>
<guid>https://academictorrents.com/details/a4fee5547056c845e31ab952598f43b42333183c</guid>
<link>https://academictorrents.com/details/a4fee5547056c845e31ab952598f43b42333183c</link>
<description>A collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. Widely used for language modeling, including the pretrained models used in the fastai library and ULMFiT algorithm.</description>
<size>190200704</size>
</item><item>
<title>IMDb Large Movie Review Dataset</title>
<category>Dataset</category>
<infohash>fd24bc44d461b10288469e05a64a8344eb079f15</infohash>
<guid>https://academictorrents.com/details/fd24bc44d461b10288469e05a64a8344eb079f15</guid>
<link>https://academictorrents.com/details/fd24bc44d461b10288469e05a64a8344eb079f15</link>
<description>A dataset for binary sentiment classification containing 25,000 highly polarized movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well.</description>
<size>26402186</size>
</item><item>
<title>Caltech 101</title>
<category>Dataset</category>
<infohash>85572f9564eb41b9d2045cbd35d742b2e3c4949f</infohash>
<guid>https://academictorrents.com/details/85572f9564eb41b9d2045cbd35d742b2e3c4949f</guid>
<link>https://academictorrents.com/details/85572f9564eb41b9d2045cbd35d742b2e3c4949f</link>
<description>Pictures of objects belonging to 101 categories, with roughly 40 to 800 images per category; most categories have around 50 images. Images are roughly 300x200 pixels. Can also be used for localization.</description>
<size>131740031</size>
</item><item>
<title>Caltech-UCSD Birds-200-2011</title>
<category>Dataset</category>
<infohash>1a5a206de443085ff07ca9a47530f1bfadf526ba</infohash>
<guid>https://academictorrents.com/details/1a5a206de443085ff07ca9a47530f1bfadf526ba</guid>
<link>https://academictorrents.com/details/1a5a206de443085ff07ca9a47530f1bfadf526ba</link>
<description>An image dataset with photos of 200 bird species (mostly North American); it can also be used for localization. Number of categories: 200; Number of images: 11,788; Annotations per image: 15 Part Locations, 312 Binary Attributes, 1 Bounding Box</description>
<size>1150585339</size>
</item><item>
<title>Oxford 102 Flowers</title>
<category>Dataset</category>
<infohash>9e3425fd8ba169e7ad499489152f06a6d5bb0be0</infohash>
<guid>https://academictorrents.com/details/9e3425fd8ba169e7ad499489152f06a6d5bb0be0</guid>
<link>https://academictorrents.com/details/9e3425fd8ba169e7ad499489152f06a6d5bb0be0</link>
<description>A dataset consisting of 102 flower categories commonly occurring in the United Kingdom. Each class consists of 40 to 258 images. The images have large scale, pose and lighting variations.</description>
<size>345236087</size>
</item><item>
<title>Stanford Cars</title>
<category>Dataset</category>
<infohash>9c90b7f6208d430bff288845d45667ab2670da56</infohash>
<guid>https://academictorrents.com/details/9c90b7f6208d430bff288845d45667ab2670da56</guid>
<link>https://academictorrents.com/details/9c90b7f6208d430bff288845d45667ab2670da56</link>
<description>16,185 images of 196 classes of cars. The data is split into 8,144 training images and 8,041 testing images, where each class has been split roughly 50-50. Classes are typically at the level of Make, Model, Year.</description>
<size>1957803273</size>
</item><item>
<title>Oxford-IIIT Pet</title>
<category>Dataset</category>
<infohash>97d27cad2802552b20b0c9a29dbf246bca73608d</infohash>
<guid>https://academictorrents.com/details/97d27cad2802552b20b0c9a29dbf246bca73608d</guid>
<link>https://academictorrents.com/details/97d27cad2802552b20b0c9a29dbf246bca73608d</link>
<description>A 37 category pet dataset with roughly 200 images for each class. The images have large variations in scale, pose and lighting. Can also be used for localization.</description>
<size>811706944</size>
</item><item>
<title>MNIST</title>
<category>Dataset</category>
<infohash>4d563087fb327739d7ec9ee9a0d32c4cb8b0355e</infohash>
<guid>https://academictorrents.com/details/4d563087fb327739d7ec9ee9a0d32c4cb8b0355e</guid>
<link>https://academictorrents.com/details/4d563087fb327739d7ec9ee9a0d32c4cb8b0355e</link>
<description>Classic dataset of small (28x28) handwritten grayscale digits, developed in the 1990s for testing the most sophisticated models of the day; today, often used as a basic “hello world” for introducing deep learning. This fast.ai datasets version uses a standard PNG format instead of the special binary format of the original, so you can use the regular data pipelines in most libraries; if you want just a single input channel like the original, simply pick a single slice from the channels axis.</description>
<size>15683414</size>
</item><item>
<title>Food-101</title>
<category>Dataset</category>
<infohash>470791483f8441764d3b01dbc4d22b3aa58ef46f</infohash>
<guid>https://academictorrents.com/details/470791483f8441764d3b01dbc4d22b3aa58ef46f</guid>
<link>https://academictorrents.com/details/470791483f8441764d3b01dbc4d22b3aa58ef46f</link>
<description>101 food categories, with 101,000 images; 250 test images and 750 training images per class. The training images were not cleaned. All images were rescaled to have a maximum side length of 512 pixels.</description>
<size>5686607260</size>
</item><item>
<title>CIFAR100</title>
<category>Dataset</category>
<infohash>4fb115df73d3313fae9264fd6c0bad061add2d63</infohash>
<guid>https://academictorrents.com/details/4fb115df73d3313fae9264fd6c0bad061add2d63</guid>
<link>https://academictorrents.com/details/4fb115df73d3313fae9264fd6c0bad061add2d63</link>
<description>This dataset is just like the CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a “fine” label (the class to which it belongs) and a “coarse” label (the superclass to which it belongs).</description>
<size>169168619</size>
</item><item>
<title>CIFAR10</title>
<category>Dataset</category>
<infohash>e5834a981a7337f81fe5bad6c890c949c73ca30c</infohash>
<guid>https://academictorrents.com/details/e5834a981a7337f81fe5bad6c890c949c73ca30c</guid>
<link>https://academictorrents.com/details/e5834a981a7337f81fe5bad6c890c949c73ca30c</link>
<description>60000 32x32 colour images in 10 classes, with 6000 images per class (50000 training images and 10000 test images). Very widely used today for testing performance of new algorithms. This fast.ai datasets version uses a standard PNG format instead of the platform-specific binary formats of the original, so you can use the regular data pipelines in most libraries.</description>
<size>135107811</size>
</item><item>
<title>Non-contrast head/brain CT CQ500 Dataset</title>
<category>Dataset</category>
<infohash>47e9d8aab761e75fd0a81982fa62bddf3a173831</infohash>
<guid>https://academictorrents.com/details/47e9d8aab761e75fd0a81982fa62bddf3a173831</guid>
<link>https://academictorrents.com/details/47e9d8aab761e75fd0a81982fa62bddf3a173831</link>
<description>CQ500 dataset of 491 computed tomography (CT) scans with 193,317 slices: anonymized DICOMs for all scans and the corresponding radiologists' reads. ![](https://i.imgur.com/wor2XEA.png) Paper: https://arxiv.org/abs/1803.05854</description>
<size>28660285880</size>
</item><item>
<title>MICCAI_BraTS_2018_Data_Validation</title>
<category>Dataset</category>
<infohash>a5912da845c7d7bec9bd0880c17ddda789ba35d5</infohash>
<guid>https://academictorrents.com/details/a5912da845c7d7bec9bd0880c17ddda789ba35d5</guid>
<link>https://academictorrents.com/details/a5912da845c7d7bec9bd0880c17ddda789ba35d5</link>
<description/>
<size>671065146</size>
</item><item>
<title>MICCAI_BraTS_2018_Data_Training</title>
<category>Dataset</category>
<infohash>a9e2741587d42ef6139aa474a95858a17952b3a5</infohash>
<guid>https://academictorrents.com/details/a9e2741587d42ef6139aa474a95858a17952b3a5</guid>
<link>https://academictorrents.com/details/a9e2741587d42ef6139aa474a95858a17952b3a5</link>
<description>https://i.imgur.com/iONFbKt.gif     ./HGG/Brats18_CBICA_AOO_1 ./HGG/Brats18_TCIA02_471_1 ./HGG/Brats18_CBICA_ARW_1 ./HGG/Brats18_CBICA_ASK_1 ./HGG/Brats18_TCIA08_105_1 ./HGG/Brats18_CBICA_AAB_1 ./HGG/Brats18_CBICA_ATV_1 ./HGG/Brats18_CBICA_ASA_1 ./HGG/Brats18_CBICA_AQN_1 ./HGG/Brats18_TCIA04_192_1 ./HGG/Brats18_2013_20_1 ./HGG/Brats18_TCIA01_147_1 ./HGG/Brats18_CBICA_APR_1 ./HGG/Brats18_TCIA02_321_1 ./HGG/Brats18_CBICA_AQD_1 ./HGG/Brats18_CBICA_ALX_1 ./HGG/Brats18_TCIA08_205_1 ./HGG/Brats18_CBICA_AQJ_1 ./HGG/Brats18_TCIA01_203_1 ./HGG/Brats18_2013_2_1 ./HGG/Brats18_CBICA_AUN_1 ./HGG/Brats18_TCIA02_300_1 ./HGG/Brats18_CBICA_ASE_1 ./HGG/Brats18_CBICA_ASO_1 ./HGG/Brats18_CBICA_ATX_1 ./HGG/Brats18_CBICA_AAL_1 ./HGG/Brats18_TCIA03_419_1 ./HGG/Brats18_CBICA_AXL_1 ./HGG/Brats18_CBICA_AQV_1 ./HGG/Brats18_CBICA_ALN_1 ./HGG/Brats18_TCIA06_165_1 ./HGG/Brats18_TCIA03_338_1 ./HGG/Brats18_TCIA01_378_1 ./HGG/Brats18_CBICA_ASY_1 ./HGG/Brats18_CBICA_AUR_1 ./HGG/Brats18_TCIA02_198_1 ./HGG/Brats18_2013_11_1 ./HGG/Brats18_CBICA_AAP_1 ./HGG/Brats18_CBICA_ATD_1 ./HGG/Brats18_TCIA02_171_1 ./HGG/Brats18_CBICA_ASW_1 ./HGG/Brats18_TCIA08_234_1 ./HGG/Brats18_TCIA03_257_1 ./HGG/Brats18_TCIA02_314_1 ./HGG/Brats18_CBICA_ABM_1 ./HGG/Brats18_TCIA03_138_1 ./HGG/Brats18_TCIA04_361_1 ./HGG/Brats18_CBICA_AQR_1 ./HGG/Brats18_TCIA05_478_1 ./HGG/Brats18_TCIA02_331_1 ./HGG/Brats18_TCIA02_135_1 ./HGG/Brats18_2013_23_1 ./HGG/Brats18_TCIA06_409_1 ./HGG/Brats18_2013_5_1 ./HGG/Brats18_CBICA_ALU_1 ./HGG/Brats18_CBICA_AYA_1 ./HGG/Brats18_CBICA_AQG_1 ./HGG/Brats18_TCIA02_322_1 ./HGG/Brats18_TCIA03_121_1 ./HGG/Brats18_CBICA_AXW_1 ./HGG/Brats18_CBICA_ANP_1 ./HGG/Brats18_TCIA01_186_1 ./HGG/Brats18_TCIA01_221_1 ./HGG/Brats18_TCIA08_167_1 ./HGG/Brats18_CBICA_ASH_1 ./HGG/Brats18_TCIA02_222_1 ./HGG/Brats18_TCIA05_444_1 ./HGG/Brats18_TCIA02_168_1 ./HGG/Brats18_TCIA03_498_1 ./HGG/Brats18_CBICA_ANZ_1 ./HGG/Brats18_TCIA06_332_1 ./HGG/Brats18_TCIA08_242_1 ./HGG/Brats18_CBICA_ARZ_1 ./HGG/Brats18_TCIA02_368_1 
./HGG/Brats18_CBICA_AOH_1 ./HGG/Brats18_TCIA02_226_1 ./HGG/Brats18_TCIA02_491_1 ./HGG/Brats18_TCIA02_309_1 ./HGG/Brats18_CBICA_AWH_1 ./HGG/Brats18_TCIA06_603_1 ./HGG/Brats18_TCIA08_280_1 ./HGG/Brats18_TCIA03_265_1 ./HGG/Brats18_TCIA05_396_1 ./HGG/Brats18_TCIA06_372_1 ./HGG/Brats18_2013_27_1 ./HGG/Brats18_TCIA02_117_1 ./HGG/Brats18_TCIA08_469_1 ./HGG/Brats18_CBICA_ARF_1 ./HGG/Brats18_CBICA_AUQ_1 ./HGG/Brats18_2013_18_1 ./HGG/Brats18_TCIA01_235_1 ./HGG/Brats18_TCIA03_375_1 ./HGG/Brats18_TCIA04_343_1 ./HGG/Brats18_2013_12_1 ./HGG/Brats18_TCIA04_328_1 ./HGG/Brats18_TCIA01_131_1 ./HGG/Brats18_TCIA06_247_1 ./HGG/Brats18_CBICA_AXO_1 ./HGG/Brats18_TCIA01_335_1 ./HGG/Brats18_TCIA01_150_1 ./HGG/Brats18_CBICA_BHM_1 ./HGG/Brats18_CBICA_AQU_1 ./HGG/Brats18_CBICA_AQQ_1 ./HGG/Brats18_CBICA_ABN_1 ./HGG/Brats18_TCIA01_425_1 ./HGG/Brats18_TCIA03_296_1 ./HGG/Brats18_TCIA08_218_1 ./HGG/Brats18_CBICA_AYW_1 ./HGG/Brats18_CBICA_AOP_1 ./HGG/Brats18_CBICA_AZD_1 ./HGG/Brats18_TCIA01_231_1 ./HGG/Brats18_TCIA04_149_1 ./HGG/Brats18_TCIA08_406_1 ./HGG/Brats18_CBICA_AOZ_1 ./HGG/Brats18_TCIA02_607_1 ./HGG/Brats18_CBICA_ABY_1 ./HGG/Brats18_CBICA_APZ_1 ./HGG/Brats18_2013_4_1 ./HGG/Brats18_CBICA_AMH_1 ./HGG/Brats18_CBICA_AWG_1 ./HGG/Brats18_2013_22_1 ./HGG/Brats18_TCIA01_411_1 ./HGG/Brats18_TCIA03_474_1 ./HGG/Brats18_TCIA02_473_1 ./HGG/Brats18_TCIA08_162_1 ./HGG/Brats18_CBICA_ASG_1 ./HGG/Brats18_CBICA_ATP_1 ./HGG/Brats18_TCIA01_499_1 ./HGG/Brats18_TCIA01_201_1 ./HGG/Brats18_2013_26_1 ./HGG/Brats18_TCIA08_436_1 ./HGG/Brats18_TCIA02_208_1 ./HGG/Brats18_CBICA_AWI_1 ./HGG/Brats18_TCIA02_608_1 ./HGG/Brats18_2013_13_1 ./HGG/Brats18_CBICA_ATF_1 ./HGG/Brats18_TCIA02_394_1 ./HGG/Brats18_CBICA_ANI_1 ./HGG/Brats18_2013_19_1 ./HGG/Brats18_TCIA04_437_1 ./HGG/Brats18_TCIA08_113_1 ./HGG/Brats18_CBICA_AQT_1 ./HGG/Brats18_CBICA_AXN_1 ./HGG/Brats18_TCIA04_479_1 ./HGG/Brats18_TCIA02_290_1 ./HGG/Brats18_CBICA_AXJ_1 ./HGG/Brats18_CBICA_AQZ_1 ./HGG/Brats18_CBICA_BHB_1 ./HGG/Brats18_CBICA_ABE_1 ./HGG/Brats18_TCIA08_278_1 
./HGG/Brats18_TCIA06_184_1 ./HGG/Brats18_CBICA_ABO_1 ./HGG/Brats18_CBICA_AQP_1 ./HGG/Brats18_CBICA_AVG_1 ./HGG/Brats18_TCIA01_401_1 ./HGG/Brats18_TCIA02_606_1 ./HGG/Brats18_TCIA03_199_1 ./HGG/Brats18_TCIA02_179_1 ./HGG/Brats18_CBICA_ANG_1 ./HGG/Brats18_TCIA02_377_1 ./HGG/Brats18_CBICA_ATB_1 ./HGG/Brats18_TCIA01_460_1 ./HGG/Brats18_2013_17_1 ./HGG/Brats18_CBICA_ASU_1 ./HGG/Brats18_TCIA08_319_1 ./HGG/Brats18_TCIA02_118_1 ./HGG/Brats18_TCIA01_412_1 ./HGG/Brats18_CBICA_AOD_1 ./HGG/Brats18_TCIA05_277_1 ./HGG/Brats18_CBICA_AYI_1 ./HGG/Brats18_TCIA02_283_1 ./HGG/Brats18_CBICA_APY_1 ./HGG/Brats18_TCIA02_455_1 ./HGG/Brats18_CBICA_BFP_1 ./HGG/Brats18_2013_7_1 ./HGG/Brats18_2013_21_1 ./HGG/Brats18_CBICA_AQO_1 ./HGG/Brats18_2013_3_1 ./HGG/Brats18_2013_25_1 ./HGG/Brats18_CBICA_AXQ_1 ./HGG/Brats18_CBICA_AME_1 ./HGG/Brats18_TCIA02_430_1 ./HGG/Brats18_TCIA04_111_1 ./HGG/Brats18_CBICA_AQA_1 ./HGG/Brats18_CBICA_AVV_1 ./HGG/Brats18_TCIA06_211_1 ./HGG/Brats18_CBICA_ASN_1 ./HGG/Brats18_TCIA01_180_1 ./HGG/Brats18_CBICA_AAG_1 ./HGG/Brats18_TCIA02_151_1 ./HGG/Brats18_TCIA01_429_1 ./HGG/Brats18_CBICA_BFB_1 ./HGG/Brats18_CBICA_AXM_1 ./HGG/Brats18_TCIA01_448_1 ./HGG/Brats18_CBICA_AVJ_1 ./HGG/Brats18_CBICA_ABB_1 ./HGG/Brats18_TCIA02_370_1 ./HGG/Brats18_2013_10_1 ./HGG/Brats18_TCIA01_190_1 ./HGG/Brats18_CBICA_AZH_1 ./HGG/Brats18_TCIA02_605_1 ./HGG/Brats18_TCIA01_390_1 ./HGG/Brats18_CBICA_ASV_1 ./HGG/Brats18_2013_14_1 ./HGG/Brats18_TCIA02_374_1 ./HGG/Brats18_CBICA_AYU_1 ./HGG/Brats18_TCIA02_274_1 ./HGG/Brats18_TCIA03_133_1 ./HGG/Brats18_CBICA_AQY_1 ./HGG/Brats18_CBICA_BHK_1 ./LGG/Brats18_TCIA10_639_1 ./LGG/Brats18_TCIA13_630_1 ./LGG/Brats18_2013_6_1 ./LGG/Brats18_TCIA13_615_1 ./LGG/Brats18_2013_8_1 ./LGG/Brats18_2013_24_1 ./LGG/Brats18_TCIA10_490_1 ./LGG/Brats18_TCIA10_637_1 ./LGG/Brats18_TCIA13_634_1 ./LGG/Brats18_TCIA10_346_1 ./LGG/Brats18_TCIA10_202_1 ./LGG/Brats18_TCIA09_312_1 ./LGG/Brats18_TCIA13_624_1 ./LGG/Brats18_TCIA10_442_1 ./LGG/Brats18_TCIA10_152_1 ./LGG/Brats18_TCIA13_645_1 
./LGG/Brats18_TCIA09_177_1 ./LGG/Brats18_TCIA10_629_1 ./LGG/Brats18_2013_15_1 ./LGG/Brats18_TCIA09_402_1 ./LGG/Brats18_TCIA10_408_1 ./LGG/Brats18_TCIA12_480_1 ./LGG/Brats18_2013_29_1 ./LGG/Brats18_TCIA10_241_1 ./LGG/Brats18_TCIA13_633_1 ./LGG/Brats18_TCIA09_493_1 ./LGG/Brats18_TCIA12_101_1 ./LGG/Brats18_TCIA12_470_1 ./LGG/Brats18_TCIA13_618_1 ./LGG/Brats18_TCIA09_451_1 ./LGG/Brats18_TCIA10_387_1 ./LGG/Brats18_TCIA09_141_1 ./LGG/Brats18_2013_1_1 ./LGG/Brats18_TCIA09_255_1 ./LGG/Brats18_TCIA10_130_1 ./LGG/Brats18_TCIA10_420_1 ./LGG/Brats18_TCIA10_393_1 ./LGG/Brats18_TCIA09_620_1 ./LGG/Brats18_TCIA10_351_1 ./LGG/Brats18_TCIA10_299_1 ./LGG/Brats18_TCIA13_642_1 ./LGG/Brats18_2013_16_1 ./LGG/Brats18_TCIA10_330_1 ./LGG/Brats18_TCIA13_623_1 ./LGG/Brats18_2013_28_1 ./LGG/Brats18_TCIA10_410_1 ./LGG/Brats18_TCIA10_282_1 ./LGG/Brats18_TCIA13_653_1 ./LGG/Brats18_TCIA10_261_1 ./LGG/Brats18_TCIA10_325_1 ./LGG/Brats18_2013_0_1 ./LGG/Brats18_TCIA12_298_1 ./LGG/Brats18_TCIA10_644_1 ./LGG/Brats18_TCIA10_625_1 ./LGG/Brats18_TCIA09_254_1 ./LGG/Brats18_TCIA10_175_1 ./LGG/Brats18_TCIA10_310_1 ./LGG/Brats18_TCIA10_640_1 ./LGG/Brats18_TCIA10_266_1 ./LGG/Brats18_TCIA10_632_1 ./LGG/Brats18_TCIA13_650_1 ./LGG/Brats18_TCIA10_307_1 ./LGG/Brats18_TCIA10_103_1 ./LGG/Brats18_TCIA10_413_1 ./LGG/Brats18_TCIA10_109_1 ./LGG/Brats18_TCIA12_249_1 ./LGG/Brats18_2013_9_1 ./LGG/Brats18_TCIA13_654_1 ./LGG/Brats18_TCIA09_428_1 ./LGG/Brats18_TCIA10_449_1 ./LGG/Brats18_TCIA13_621_1 ./LGG/Brats18_TCIA10_276_1 ./LGG/Brats18_TCIA09_462_1 ./LGG/Brats18_TCIA10_628_1 ./LGG/Brats18_TCIA12_466_1</description>
<size>2325982089</size>
</item><item>
<title>CS231n: Convolutional Neural Networks Spring 2017</title>
<category>Course</category>
<infohash>ed8a16ebb346e14119a03371665306609e485f13</infohash>
<guid>https://academictorrents.com/details/ed8a16ebb346e14119a03371665306609e485f13</guid>
<link>https://academictorrents.com/details/ed8a16ebb346e14119a03371665306609e485f13</link>
<description>Stanford course on Convolutional Neural Networks for Visual Recognition # Course Description Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization and detection. Recent developments in neural network (aka “deep learning”) approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This course is a deep dive into details of the deep learning architectures with a focus on learning end-to-end models for these tasks, particularly image classification. During the 10-week course, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. The final assignment will involve training a multi-million parameter convolutional neural network and applying it on the largest image classification dataset (ImageNet). We will focus on teaching how to set up the problem of image recognition, the learning algorithms (e.g. backpropagation), practical engineering tricks for training and fine-tuning the networks and guide the students through hands-on assignments and a final course project. Much of the background and materials of this course will be drawn from the ImageNet Challenge. https://i.imgur.com/ps0x3Wo.png</description>
<size>2625171165</size>
</item><item>
<title>Medical Segmentation Decathlon Datasets</title>
<category>Dataset</category>
<infohash>274be65156ed14828fb7b30b82407a2417e1924a</infohash>
<guid>https://academictorrents.com/details/274be65156ed14828fb7b30b82407a2417e1924a</guid>
<link>https://academictorrents.com/details/274be65156ed14828fb7b30b82407a2417e1924a</link>
<description>https://i.imgur.com/QqgA5n4.jpg With recent advances in machine learning, semantic segmentation algorithms are becoming increasingly general purpose and translatable to unseen tasks. Many key algorithmic advances in the field of medical imaging are commonly validated on a small number of tasks, limiting our understanding of the generalisability of the proposed contributions. A model which works out-of-the-box on many tasks, in the spirit of AutoML, would have a tremendous impact on healthcare. The field of medical imaging is also missing a fully open source and comprehensive benchmark for general purpose algorithmic validation and testing, covering a large span of challenges such as small data, unbalanced labels, large-ranging object scales, multi-class labels, and multimodal imaging. This challenge and dataset aim to provide such a resource through the open sourcing of large medical imaging datasets on several highly different tasks, and by standardising the analysis and validation process.     
4.6M    ./Task06_Lung/labelsTr 5.7G    ./Task06_Lung/imagesTr 2.9G    ./Task06_Lung/imagesTs 8.6G    ./Task06_Lung 240K    ./Task05_Prostate/labelsTr 150M    ./Task05_Prostate/imagesTr 79M     ./Task05_Prostate/imagesTs 229M    ./Task05_Prostate 15M     ./Task01_BrainTumour/labelsTr 4.5G    ./Task01_BrainTumour/imagesTr 2.7G    ./Task01_BrainTumour/imagesTs 7.1G    ./Task01_BrainTumour 8.6M    ./Task07_Pancreas/labelsTr 7.6G    ./Task07_Pancreas/imagesTr 3.9G    ./Task07_Pancreas/imagesTs 12G     ./Task07_Pancreas 388K    ./Task02_Heart/labelsTr 249M    ./Task02_Heart/imagesTr 186M    ./Task02_Heart/imagesTs 435M    ./Task02_Heart 8.7M    ./Task08_HepaticVessel/labelsTr 5.8G    ./Task08_HepaticVessel/imagesTr 3.0G    ./Task08_HepaticVessel/imagesTs 8.8G    ./Task08_HepaticVessel 1.3M    ./Task09_Spleen/labelsTr 1.1G    ./Task09_Spleen/imagesTr 461M    ./Task09_Spleen/imagesTs 1.5G    ./Task09_Spleen 14M     ./Task10_Colon/labelsTr 4.0G    ./Task10_Colon/imagesTr 1.9G    ./Task10_Colon/imagesTs 5.9G    ./Task10_Colon 30M     ./Task03_Liver/labelsTr 19G     ./Task03_Liver/imagesTr 8.6G    ./Task03_Liver/imagesTs 27G     ./Task03_Liver 1.1M    ./Task04_Hippocampus/labelsTr 19M     ./Task04_Hippocampus/imagesTr 8.8M    ./Task04_Hippocampus/imagesTs 29M     ./Task04_Hippocampus 71G     .     Competition site: https://decathlon-10.grand-challenge.org/</description>
<size>75906970628</size>
</item><item>
<title>Ingenieria de software enfoque practico [pdf]</title>
<category>Paper</category>
<infohash>57e49a19f4af21e9c139a893ecfa56f0c11c2535</infohash>
<guid>https://academictorrents.com/details/57e49a19f4af21e9c139a893ecfa56f0c11c2535</guid>
<link>https://academictorrents.com/details/57e49a19f4af21e9c139a893ecfa56f0c11c2535</link>
<description>Spanish-language software engineering textbook.</description>
<size>7003461</size>
</item><item>
<title>nyu_depth_v2_labeled.mat</title>
<category>Dataset</category>
<infohash>47a9a46bb784b394e398228d4c85a8d61d01dfa8</infohash>
<guid>https://academictorrents.com/details/47a9a46bb784b394e398228d4c85a8d61d01dfa8</guid>
<link>https://academictorrents.com/details/47a9a46bb784b394e398228d4c85a8d61d01dfa8</link>
<description>The labeled dataset is a subset of the Raw Dataset. It comprises pairs of RGB and depth frames that have been synchronized and annotated with dense labels for every image. In addition to the projected depth maps, we have included a set of preprocessed depth maps whose missing values have been filled in using the colorization scheme of Levin et al. Unlike the Raw dataset, the labeled dataset is provided as a Matlab .mat file with the following variables: accelData – Nx4 matrix of accelerometer values indicating when each frame was taken. The columns contain the roll, yaw, pitch and tilt angle of the device. depths – HxWxN matrix of in-painted depth maps, where H and W are the height and width, respectively, and N is the number of images. The values of the depth elements are in meters. images – HxWx3xN matrix of RGB images, where H and W are the height and width, respectively, and N is the number of images. instances – HxWxN matrix of instance maps. Use get_instance_masks.m in the Toolbox to recover masks for each object instance in a scene. labels – HxWxN matrix of object label masks, where H and W are the height and width, respectively, and N is the number of images. The labels range from 1..C, where C is the total number of classes. If a pixel’s label value is 0, then that pixel is ‘unlabeled’. names – Cx1 cell array of the English names of each class. namesToIds – map from English label names to class IDs (with C key-value pairs). rawDepths – HxWxN matrix of raw depth maps, where H and W are the height and width, respectively, and N is the number of images. These depth maps capture the depth images after they have been projected onto the RGB image plane but before the missing depth values have been filled in. Additionally, the depth non-linearity from the Kinect device has been removed, and the values of each depth image are in meters. 
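As an illustration of the variable layout described so far, the following hypothetical Python sketch writes and reads back a miniature .mat file with the same field shapes (all names of the toy file and its sizes are made up for this example; the real multi-gigabyte file is likely stored in MATLAB v7.3/HDF5 format and would typically be opened with h5py rather than scipy.io.loadmat):

```python
import numpy as np
from scipy.io import savemat, loadmat

# Hypothetical miniature file mirroring the documented variable layout:
# N=2 frames of H=4 x W=3 pixels (the real dataset uses full Kinect frames).
H, W, N = 4, 3, 2
savemat("mini_labeled.mat", {
    "accelData": np.zeros((N, 4)),                     # roll, yaw, pitch, tilt per frame
    "depths": np.zeros((H, W, N)),                     # in-painted depth maps, in meters
    "images": np.zeros((H, W, 3, N), dtype=np.uint8),  # RGB images
    "labels": np.zeros((H, W, N), dtype=np.uint16),    # 0 means 'unlabeled'
    "rawDepths": np.zeros((H, W, N)),                  # projected but unfilled depth
})

mat = loadmat("mini_labeled.mat")
print(mat["images"].shape)   # (4, 3, 3, 2)
print(mat["depths"].shape)   # (4, 3, 2)
```

The round-trip shows how the H, W and N axes line up across the depth, image and label matrices.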
rawDepthFilenames – Nx1 cell array of the filenames (in the Raw dataset) that were used for each of the depth images in the labeled dataset. rawRgbFilenames – Nx1 cell array of the filenames (in the Raw dataset) that were used for each of the RGB images in the labeled dataset. scenes – Nx1 cell array of the name of the scene from which each image was taken. sceneTypes – Nx1 cell array of the scene type from which each image was taken.</description>
<size>2971894459</size>
</item><item>
<title>CamVid - The Cambridge-driving Labeled Video Database</title>
<category>Dataset</category>
<infohash>a6431e7dd33615194c5936fd8a35db043ab51058</infohash>
<guid>https://academictorrents.com/details/a6431e7dd33615194c5936fd8a35db043ab51058</guid>
<link>https://academictorrents.com/details/a6431e7dd33615194c5936fd8a35db043ab51058</link>
<description>The Cambridge-driving Labeled Video Database (CamVid) is the first collection of videos with object class semantic labels, complete with metadata. The database provides ground truth labels that associate each pixel with one of 32 semantic classes. The database addresses the need for experimental data to quantitatively evaluate emerging algorithms. While most videos are filmed with fixed-position CCTV-style cameras, our data was captured from the perspective of a driving automobile. The driving scenario increases the number and heterogeneity of the observed object classes. Over ten minutes of high-quality 30Hz footage is provided, with corresponding semantically labeled images at 1Hz and, in part, 15Hz. The CamVid Database offers four contributions that are relevant to object analysis researchers. First, the per-pixel semantic segmentation of over 700 images was specified manually, and was then inspected and confirmed by a second person for accuracy. Second, the high-quality and large resolution color video images in the database represent valuable extended duration digitized footage to those interested in driving scenarios or ego-motion. Third, we filmed calibration sequences for the camera color response and intrinsics, and computed a 3D camera pose for each frame in the sequences. Finally, in support of expanding this or other databases, we offer custom-made labeling software for assisting users who wish to paint precise class-labels for other images and videos. We evaluated the relevance of the database by measuring the performance of an algorithm from each of three distinct domains: multi-class object recognition, pedestrian detection, and label propagation. 
![](https://i.imgur.com/q5UYxWa.png) #### Citation Request Brostow, Shotton, Fauqueur, Cipolla: “Segmentation and Recognition Using Structure from Motion Point Clouds”, ECCV 2008. Brostow, Fauqueur, Cipolla: “Semantic Object Classes in Video: A High-Definition Ground Truth Database”, Pattern Recognition Letters.</description>
<size>16567942</size>
</item><item>
<title>What is Gab? A Bastion of Free Speech or an Alt-Right Echo Chamber?</title>
<category>Dataset</category>
<infohash>3e16a88e26e19c131c282438be5d5a39242be05d</infohash>
<guid>https://academictorrents.com/details/3e16a88e26e19c131c282438be5d5a39242be05d</guid>
<link>https://academictorrents.com/details/3e16a88e26e19c131c282438be5d5a39242be05d</link>
<description/>
<size>6007232745</size>
</item><item>
<title>MoNuSeg Training Data - Multi-organ nuclei segmentation from H&amp;E stained histopathological images</title>
<category>Dataset</category>
<infohash>c87688437fb416f66eecbd8c419aba00dd12997f</infohash>
<guid>https://academictorrents.com/details/c87688437fb416f66eecbd8c419aba00dd12997f</guid>
<link>https://academictorrents.com/details/c87688437fb416f66eecbd8c419aba00dd12997f</link>
<description>Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Techniques that accurately segment nuclei in diverse images spanning a range of patients, organs, and disease states can significantly contribute to the development of clinical and medical research software. Once accurately segmented, nuclear morphometric and appearance features such as density, nucleus-to-cytoplasm ratio, average size, and pleomorphism can be used not only to assess cancer grades but also to predict treatment effectiveness. Identifying different types of nuclei based on their segmentation can also yield information about gland shapes, which, for example, is important for cancer grading. This challenge will showcase the best nuclei segmentation techniques that work on a diverse set of H&amp;E stained histology images obtained from different hospitals spanning multiple patients and organs. This will enable training and testing of readily usable (or generalized) nuclear segmentation software. The dataset for this challenge was obtained by carefully annotating tissue images of several patients with tumors of different organs who were diagnosed at multiple hospitals. The dataset was created by downloading H&amp;E stained tissue images captured at 40x magnification from the TCGA archive. H&amp;E staining is a routine protocol to enhance the contrast of a tissue section and is commonly used for tumor assessment (grading, staging, etc.). Given the diversity of nuclei appearances across multiple organs and patients, and the richness of staining protocols adopted at multiple hospitals, the training dataset will enable the development of robust and generalizable nuclei segmentation techniques that work right out of the box. ![](https://i.imgur.com/2p2GMWt.png) #### Citation Request N. Kumar, R. Verma, S. Sharma, S. Bhargava, A. Vahadane and A. Sethi, "A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology," in IEEE Transactions on Medical Imaging, vol. 36, no. 7, pp. 1550-1560, July 2017</description>
<size>142306116</size>
</item><item>
<title>[Coursera] Case-Based Introduction to Biostatistics</title>
<category>Course</category>
<infohash>82d20fe4c1bf8f6a14e71fcae052b3f6ea014e4c</infohash>
<guid>https://academictorrents.com/details/82d20fe4c1bf8f6a14e71fcae052b3f6ea014e4c</guid>
<link>https://academictorrents.com/details/82d20fe4c1bf8f6a14e71fcae052b3f6ea014e4c</link>
<description>Introductory biostatistics course taught by Dr. Scott L. Zeger of the Johns Hopkins Bloomberg School of Public Health. This course teaches key statistical principles and methods that will enable you to reason more carefully and critically about scientific questions. The course is case-based, so each principle or method is introduced through a case study. The course is designed around a few important health questions.</description>
<size>312088103</size>
</item><item>
<title>Holistic Recognition of Low Quality License Plates (HDR dataset)</title>
<category>Dataset</category>
<infohash>8ed33d02d6b36c389dd077ea2478cc83ad117ef3</infohash>
<guid>https://academictorrents.com/details/8ed33d02d6b36c389dd077ea2478cc83ad117ef3</guid>
<link>https://academictorrents.com/details/8ed33d02d6b36c389dd077ea2478cc83ad117ef3</link>
<description>This dataset focuses on recognition of license plates in low resolution and low quality images. ![](https://i.imgur.com/4y2lGaX.png) ### Citation Request J. Špaňhel, J. Sochor, R. Juránek, A. Herout, L. Maršík and P. Zemčík, "Holistic recognition of low quality license plates by CNN using track annotated data," 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Lecce, 2017, pp. 1-6. doi: 10.1109/AVSS.2017.8078501</description>
<size>65883722</size>
</item><item>
<title>cataracts-2018-test</title>
<category>Dataset</category>
<infohash>67e1713d9be71af6e50246479366c9288f0c6b21</infohash>
<guid>https://academictorrents.com/details/67e1713d9be71af6e50246479366c9288f0c6b21</guid>
<link>https://academictorrents.com/details/67e1713d9be71af6e50246479366c9288f0c6b21</link>
<description>Surgical tool detection is attracting increasing attention from the medical image processing community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and real-time decision support. In 2017, we organized a challenge on surgical tool detection in cataract surgery videos. With 14 participating teams, that first edition of CATARACTS was a success. Therefore, we decided to repeat the experience in 2018 with new data and new technical challenges. In particular, the 2018 edition provides two synchronized video streams per surgery: one showing the patient's eye (as in the 2017 edition), the other showing the surgical tray. By knowing which tools exit or enter the surgical tray, we know which tools are likely being used by the surgeon and which tools surely are not. This second edition of CATARACTS is organized as a sub-challenge of the MICCAI 2018 EndoVis challenge. It should be noted that CATARACTS does not rely on endoscopic videos, but rather on microscopic videos. However, these two modalities share many similarities that justify a joint event. The training set of CATARACTS 2018 was uploaded a few months ago. This is the test set.</description>
<size>362441328640</size>
</item><item>
<title>Spoken Wikipedia 2018</title>
<category>Dataset</category>
<infohash>5d2a7304089a97cecb5de3f055495ed65013c968</infohash>
<guid>https://academictorrents.com/details/5d2a7304089a97cecb5de3f055495ed65013c968</guid>
<link>https://academictorrents.com/details/5d2a7304089a97cecb5de3f055495ed65013c968</link>
<description>Spoken Wikipedia 2005-2018 in MP3 format --- en/ English Spoken Wikipedia There are 857 (of 1,300) spoken articles in English. See list of audio articles: http://en.wikipedia.org/wiki/Wikipedia:Spoken_articles --- ru/ Russian Spoken Wikipedia There are 238 spoken articles in Russian. See list of audio articles: http://ru.wikipedia.org/wiki/%D0%92%D0%B8%D0%BA%D0%B8%D0%BF%D0%B5%D0%B4%D0%B8%D1%8F:%D0%A1%D0%BF%D0%B8%D1%81%D0%BE%D0%BA_%D0%B0%D1%83%D0%B4%D0%B8%D0%BE%D1%81%D1%82%D0%B0%D1%82%D0%B5%D0%B9 --- Bonus: + uk/ 1 audio article of Ukrainian Wikipedia (Music of Ukraine); + de/ 3 audio articles of German Wikipedia (Alexander Pushkin, T-60, Afghanischer Bürgerkrieg (1989-2001)). P.S. If you want, I can add the torrent with the same files in OGG format.</description>
<size>16833686963</size>
</item><item>
<title>VizWiz v1.0 dataset (Answering Visual Questions from Blind People)</title>
<category>Dataset</category>
<infohash>b633e14aa084fab57f20ad0b4612e0932ae1f2dc</infohash>
<guid>https://academictorrents.com/details/b633e14aa084fab57f20ad0b4612e0932ae1f2dc</guid>
<link>https://academictorrents.com/details/b633e14aa084fab57f20ad0b4612e0932ae1f2dc</link>
<description>We propose an artificial intelligence challenge to design algorithms that assist people who are blind to overcome their daily visual challenges. For this purpose, we introduce the VizWiz dataset, which originates from a natural visual question answering setting where blind people each took an image and recorded a spoken question about it, together with 10 crowdsourced answers per visual question. Our proposed challenge addresses the following two tasks for this dataset: (1) predict the answer to a visual question and (2) predict whether a visual question cannot be answered. Ultimately, we hope this work will educate more people about the technological needs of blind people while providing an exciting new opportunity for researchers to develop assistive technologies that eliminate accessibility barriers for blind people. VizWiz v1.0 dataset download: 20,000 training image/question pairs; 200,000 training answer/answer confidence pairs; 3,173 validation image/question pairs; 31,730 validation answer/answer confidence pairs; 8,000 test image/question pairs; Python API to read and visualize the VizWiz dataset; Python challenge evaluation code. ![](https://i.imgur.com/zXB6Qci.png) ### Publications Danna Gurari, Qing Li, Abigale J. Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P. Bigham. "VizWiz Grand Challenge: Answering Visual Questions from Blind People." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. Jeffrey P. Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller, Robert C. Miller, Robin Miller, Aubrey Tatarowicz, Brandyn White, Samuel White, and Tom Yeh. "VizWiz: Nearly Real-time Answers to Visual Questions." ACM User Interface Software and Technology Symposium (UIST), 2010.</description>
<size>15394669439</size>
</item><item>
<title>Human MCF7 cells – compound-profiling experiment (BBBC021v1)</title>
<category>Dataset</category>
<infohash>014980e8a505760ed4c33641ac7e603d6e1778f4</infohash>
<guid>https://academictorrents.com/details/014980e8a505760ed4c33641ac7e603d6e1778f4</guid>
<link>https://academictorrents.com/details/014980e8a505760ed4c33641ac7e603d6e1778f4</link>
<description>![](https://i.imgur.com/nilVHAT.png) ![](https://i.imgur.com/8dcAHs0.png) ### Description of the biological application Phenotypic profiling attempts to summarize multiparametric, feature-based analysis of cellular phenotypes of each sample so that similarities between profiles reflect similarities between samples. Profiling is well established for biological readouts such as transcript expression and proteomics. Image-based profiling, however, is still an emerging technology. This image set provides a basis for testing image-based profiling methods with respect to their ability to predict the mechanisms of action of a compendium of drugs. The image set was collected using a typical set of morphological labels and uses a physiologically relevant p53-wildtype breast-cancer model system (MCF-7) and a mechanistically distinct set of targeted and cancer-relevant cytotoxic compounds that induce a broad range of gross and subtle phenotypes. ### Images The images are of MCF-7 breast cancer cells treated for 24 h with a collection of 113 small molecules at eight concentrations. The cells were fixed, labeled for DNA, F-actin, and β-tubulin, and imaged by fluorescent microscopy as described in [Caie et al., Molecular Cancer Therapeutics, 2010]. There are 39,600 image files (13,200 fields of view imaged in three channels) in TIFF format. We provide the images in 55 ZIP archives, one for each microtiter plate. ### Metadata The file BBBC021_v1_image.csv contains the metadata, with the following fields: TableNumber ImageNumber Image_FileName_DAPI Image_PathName_DAPI Image_FileName_Tubulin Image_PathName_Tubulin Image_FileName_Actin Image_PathName_Actin Image_Metadata_Plate_DAPI Image_Metadata_Well_DAPI Replicate Image_Metadata_Compound Image_Metadata_Concentration
### Ground truth A subset of the compound-concentrations has been identified as clearly having one of 12 different primary mechanisms of action. The mechanistic classes were selected so as to represent a wide cross-section of cellular morphological phenotypes. The differences between phenotypes were in some cases very subtle: we were only able to identify 6 of the 12 mechanisms visually; the remainder were defined based on the literature. The file BBBC021_v1_moa.csv contains the mechanisms of action of 103 compound-concentrations (38 compounds at 1–7 concentrations each). The fields are: compound concentration moa ### Recommended citation "We used image set BBBC021v1 [Caie et al., Molecular Cancer Therapeutics, 2010], available from the Broad Bioimage Benchmark Collection [Ljosa et al., Nature Methods, 2012]."</description>
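A minimal sketch of working with rows that follow the BBBC021_v1_image.csv schema above: fields of view are grouped by compound-concentration, the treatment unit to which the mechanism-of-action ground truth is assigned. The rows below are illustrative stand-ins, not real metadata.

```python
from collections import defaultdict

# Illustrative rows following the BBBC021_v1_image.csv schema
# (real values would be read from the CSV itself).
rows = [
    {"Image_Metadata_Compound": "taxol", "Image_Metadata_Concentration": 1.0,
     "Replicate": 1, "Image_FileName_DAPI": "a_DAPI.tif"},
    {"Image_Metadata_Compound": "taxol", "Image_Metadata_Concentration": 1.0,
     "Replicate": 2, "Image_FileName_DAPI": "b_DAPI.tif"},
    {"Image_Metadata_Compound": "DMSO", "Image_Metadata_Concentration": 0.0,
     "Replicate": 1, "Image_FileName_DAPI": "c_DAPI.tif"},
]

# Group fields of view by (compound, concentration).
by_treatment = defaultdict(list)
for row in rows:
    key = (row["Image_Metadata_Compound"], row["Image_Metadata_Concentration"])
    by_treatment[key].append(row["Image_FileName_DAPI"])

n_treatments = len(by_treatment)  # distinct compound-concentrations
```

Per-treatment grouping like this is the natural first step before profile aggregation, since replicate fields of view of the same compound-concentration share one ground-truth label.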
<size>45688728298</size>
</item><item>
<title>wold-v1.1.zip</title>
<category>Paper</category>
<infohash>aacd714830ef4ab65ce00686258693504bd2478a</infohash>
<guid>https://academictorrents.com/details/aacd714830ef4ab65ce00686258693504bd2478a</guid>
<link>https://academictorrents.com/details/aacd714830ef4ab65ce00686258693504bd2478a</link>
<description>World Loanword Database v1.1. "It provides vocabularies (mini-dictionaries of about 1000-2000 entries) of 41 languages from around the world, with comprehensive information about the loanword status of each word. It allows users to find loanwords, source words and donor languages in each of the 41 languages, but also makes it easy to compare loanwords across languages. Each vocabulary was contributed by an expert on the language and its history. An accompanying book has been published by De Gruyter Mouton (Loanwords in the World's Languages: A Comparative Handbook, edited by Martin Haspelmath &amp; Uri Tadmor). The World Loanword Database consists of vocabularies contributed by 41 different authors or author teams. When citing material from the database, please cite the corresponding vocabulary (or vocabularies). The database can be accessed by language, by meaning or by author. The World Loanword Database is the result of a collaborative project coordinated by Uri Tadmor and Martin Haspelmath between 2004 and 2008, called the Loanword Typology Project (LWT). Most of the contributors took part in workshops at which the procedures for selecting and annotating words were discussed extensively. The list of 1460 meanings on which the vocabularies are based is called the Loanword Typology meaning list, and it is in turn based on the list of the Intercontinental Dictionary Series." (http://wold.clld.org/)</description>
<size>6206277</size>
</item><item>
<title>Vision meets Robotics: The KITTI Dataset</title>
<category>Dataset</category>
<infohash>8a72adac813a69b15a9764dae9c09ef79a25ad8f</infohash>
<guid>https://academictorrents.com/details/8a72adac813a69b15a9764dae9c09ef79a25ad8f</guid>
<link>https://academictorrents.com/details/8a72adac813a69b15a9764dae9c09ef79a25ad8f</link>
<description>From: http://www.cvlibs.net/datasets/kitti/raw_data.php This is the file : 2011_09_29_drive_0071 (4.1 GB)  [synced+rectified data] This page contains our raw data recordings, sorted by category (see menu above). So far, we included only sequences, for which we either have 3D object labels or which occur in our odometry benchmark training set. The dataset comprises the following information, captured and synchronized at 10 Hz: Raw (unsynced+unrectified) and processed (synced+rectified) grayscale stereo sequences (0.5 Megapixels, stored in png format) Raw (unsynced+unrectified) and processed (synced+rectified) color stereo sequences (0.5 Megapixels, stored in png format) 3D Velodyne point clouds (100k points per frame, stored as binary float matrix) 3D GPS/IMU data (location, speed, acceleration, meta information, stored as text file) Calibration (Camera, Camera-to-GPS/IMU, Camera-to-Velodyne, stored as text file) 3D object tracklet labels (cars, trucks, trams, pedestrians, cyclists, stored as xml file) Here, "unsynced+unrectified" refers to the raw input frames where images are distorted and the frame indices do not correspond, while "synced+rectified" refers to the processed data where images have been rectified and undistorted and where the data frame numbers correspond across all sensor streams. For both settings, files with timestamps are provided. Most people require only the "synced+rectified" version of the files. More detailed information about the sensors, data format and calibration can be found here: Preprint of our IJRR data paper Download the raw data development kit (1 MB) Download the raw dataset download script (1 MB) (thanks to Omid Hosseini for sharing!) Mark Muth has written a QT-based visualizer for point cloud and tracklet sequences. 
Yani Ioannou (University of Toronto) has put together some tools for working with KITTI raw data using the PCL. Christian Herdtweck (MPI Tuebingen) has written a python parser for reading the object label XML files. Lee Clement and his group (University of Toronto) have written some python tools for loading and parsing the KITTI raw and odometry datasets. Tomáš Krejčí created a simple tool for conversion of raw KITTI datasets to ROS bag files: kitti2bag. Helen Oleynikova created several tools for working with the KITTI raw dataset using ROS: kitti_to_rosbag. Mennatullah Siam has created the KITTI MoSeg dataset with ground truth annotations for moving object detection. Note: We were not able to annotate all sequences and only provide those tracklet annotations that passed the 3rd human validation stage, i.e., those that are of very high quality. For sequences for which tracklets are available, you will find the link [tracklets] in the download category.</description>
<size>4337332333</size>
</item><item>
<title>gta_full_dist.tar</title>
<category>Dataset</category>
<infohash>cd01e6d3fc93ad2c48f779cb356715c6794614e1</infohash>
<guid>https://academictorrents.com/details/cd01e6d3fc93ad2c48f779cb356715c6794614e1</guid>
<link>https://academictorrents.com/details/cd01e6d3fc93ad2c48f779cb356715c6794614e1</link>
<description>Deep learning has rapidly transformed the state-of-the-art algorithms used to address a variety of problems in computer vision and robotics. These breakthroughs have relied upon massive amounts of human-annotated training data. This time-consuming process has begun impeding the progress of these deep learning efforts. This paper describes a method to incorporate photo-realistic computer images from a simulation engine to rapidly generate annotated data that can be used for the training of machine learning algorithms. We demonstrate that a state-of-the-art architecture, which is trained only using these synthetic annotations, performs better than the identical architecture trained on human-annotated real-world data, when tested on the KITTI data set for vehicle detection. By training machine learning algorithms on a rich virtual world, real objects in real scenes can be learned and classified using synthetic data. This approach offers the possibility of accelerating deep learning's application to sensor-based classification problems like those that appear in self-driving cars. The source code and data to train and validate the networks described in this paper are made available for researchers.</description>
<size>1056718352384</size>
</item><item>
<title>merged_trust_pathway_pancan.hdf5</title>
<category>Dataset</category>
<infohash>a8b1bb215a6ab49d6947f666cc7780a66e1b86fe</infohash>
<guid>https://academictorrents.com/details/a8b1bb215a6ab49d6947f666cc7780a66e1b86fe</guid>
<link>https://academictorrents.com/details/a8b1bb215a6ab49d6947f666cc7780a66e1b86fe</link>
<description/>
<size>1095214724</size>
</item><item>
<title>TCGA_tissue_ppi.hdf5</title>
<category>Dataset</category>
<infohash>4070a45bc7dd69584f33e86ce193a2c903f0776d</infohash>
<guid>https://academictorrents.com/details/4070a45bc7dd69584f33e86ce193a2c903f0776d</guid>
<link>https://academictorrents.com/details/4070a45bc7dd69584f33e86ce193a2c903f0776d</link>
<description/>
<size>1748319632</size>
</item><item>
<title>pancan-tissue-graph.hdf5</title>
<category>Dataset</category>
<infohash>ae2691e4f4f068d32f83797b224eb854b27bd3ee</infohash>
<guid>https://academictorrents.com/details/ae2691e4f4f068d32f83797b224eb854b27bd3ee</guid>
<link>https://academictorrents.com/details/ae2691e4f4f068d32f83797b224eb854b27bd3ee</link>
<description/>
<size>1066392832</size>
</item><item>
<title>kegg.hdf5</title>
<category>Dataset</category>
<infohash>3c8ac6e7ab6fbf962cedb77192177c58b7518b23</infohash>
<guid>https://academictorrents.com/details/3c8ac6e7ab6fbf962cedb77192177c58b7518b23</guid>
<link>https://academictorrents.com/details/3c8ac6e7ab6fbf962cedb77192177c58b7518b23</link>
<description/>
<size>209057632</size>
</item><item>
<title>"Pwned Passwords" Dataset</title>
<category>Dataset</category>
<infohash>53555c69e3799d876159d7290ea60e56b35e36a9</infohash>
<guid>https://academictorrents.com/details/53555c69e3799d876159d7290ea60e56b35e36a9</guid>
<link>https://academictorrents.com/details/53555c69e3799d876159d7290ea60e56b35e36a9</link>
<description>Version 3, with 517M hashes and counts of password usage, ordered from most to least prevalent. Pwned Passwords are 517,238,891 real-world passwords previously exposed in data breaches. This exposure makes them unsuitable for ongoing use as they're at much greater risk of being used to take over other accounts. They're searchable online as well as downloadable for use in other online systems. The entire set of passwords is downloadable for free, with each password represented as a SHA-1 hash to protect the original value (some passwords contain personally identifiable information), followed by a count of how many times that password had been seen in the source data breaches. The list may be integrated into other systems and used to verify whether a password has previously appeared in a data breach, after which a system may warn the user or even block the password outright.</description>
<size>11101449979</size>
</item><item>
<title>Open Payments Dataset - 2017 Program Year </title>
<category>Dataset</category>
<infohash>638bacf15e6759c8c1a34a560341079f5e727cc3</infohash>
<guid>https://academictorrents.com/details/638bacf15e6759c8c1a34a560341079f5e727cc3</guid>
<link>https://academictorrents.com/details/638bacf15e6759c8c1a34a560341079f5e727cc3</link>
<description>Every year, CMS will update the Open Payments data at least once after its initial publication. The refreshed data will include updates to data disputes and other data corrections made since the initial publication of this data documenting payments or transfers of value to physicians and teaching hospitals, and physician ownership and investment interests. This financial data is submitted by applicable manufacturers and applicable group purchasing organizations (GPOs). #### What data is collected? Applicable manufacturers and GPOs submit data to Open Payments about payments or other transfers of value between applicable manufacturers and GPOs and physicians or teaching hospitals: 1. Paid directly to physicians and teaching hospitals (known as direct payments) 2. Paid indirectly to physicians and teaching hospitals (known as indirect payments) through an intermediary such as a medical specialty society 3. Designated by physicians or teaching hospitals to be paid to another party (known as third party payments) There are three distinct ways for you to review and search the data (and remember, you can view the summary data dashboard for an overview of published data). The Open Payments Final Rule §403.910 provides applicable manufacturers and applicable GPOs the opportunity to request a delay in publication for a period not to exceed four calendar years after the date the payment or other transfer of value was made, or upon the approval, licensure or clearance of the covered drug, device, biological, or medical supply by the FDA.</description>
<size>562184363</size>
</item><item>
<title>Open Payments Dataset - 2016 Program Year </title>
<category>Dataset</category>
<infohash>121e32c9431fbb1083a6c2b82052b3b36f47efa7</infohash>
<guid>https://academictorrents.com/details/121e32c9431fbb1083a6c2b82052b3b36f47efa7</guid>
<link>https://academictorrents.com/details/121e32c9431fbb1083a6c2b82052b3b36f47efa7</link>
<description>Every year, CMS will update the Open Payments data at least once after its initial publication. The refreshed data will include updates to data disputes and other data corrections made since the initial publication of this data documenting payments or transfers of value to physicians and teaching hospitals, and physician ownership and investment interests. This financial data is submitted by applicable manufacturers and applicable group purchasing organizations (GPOs). #### What data is collected? Applicable manufacturers and GPOs submit data to Open Payments about payments or other transfers of value between applicable manufacturers and GPOs and physicians or teaching hospitals: 1. Paid directly to physicians and teaching hospitals (known as direct payments) 2. Paid indirectly to physicians and teaching hospitals (known as indirect payments) through an intermediary such as a medical specialty society 3. Designated by physicians or teaching hospitals to be paid to another party (known as third party payments) There are three distinct ways for you to review and search the data (and remember, you can view the summary data dashboard for an overview of published data). The Open Payments Final Rule §403.910 provides applicable manufacturers and applicable GPOs the opportunity to request a delay in publication for a period not to exceed four calendar years after the date the payment or other transfer of value was made, or upon the approval, licensure or clearance of the covered drug, device, biological, or medical supply by the FDA.</description>
<size>607272313</size>
</item><item>
<title>LiTS – Liver Tumor Segmentation Challenge (LiTS17)</title>
<category>Dataset</category>
<infohash>27772adef6f563a1ecc0ae19a528b956e6c803ce</infohash>
<guid>https://academictorrents.com/details/27772adef6f563a1ecc0ae19a528b956e6c803ce</guid>
<link>https://academictorrents.com/details/27772adef6f563a1ecc0ae19a528b956e6c803ce</link>
<description>The liver is a common site of primary (i.e. originating in the liver, like hepatocellular carcinoma, HCC) or secondary (i.e. spreading to the liver, like colorectal cancer) tumor development. Due to their heterogeneous and diffusive shape, automatic segmentation of tumor lesions is very challenging. Until now, only interactive methods achieved acceptable results segmenting liver lesions. With our challenge we encourage researchers to develop automatic segmentation algorithms to segment liver lesions in contrast-enhanced abdominal CT scans. The data and segmentations are provided by various clinical sites around the world. The training data set contains 130 CT scans and the test data set 70 CT scans. The challenge is organised in conjunction with ISBI 2017 and MICCAI 2017. For MICCAI 2017 we added tasks for liver segmentation and tumor burden estimation. ![](https://i.imgur.com/ia2qGlH.png) ![](https://i.imgur.com/eDN20ck.png) Paper reference: https://arxiv.org/abs/1901.04056</description>
<size>16655115138</size>
</item><item>
<title>North America roads GIS data</title>
<category>Dataset</category>
<infohash>0a853fdcc1d28c306d75e29195a5536087f6e2b4</infohash>
<guid>https://academictorrents.com/details/0a853fdcc1d28c306d75e29195a5536087f6e2b4</guid>
<link>https://academictorrents.com/details/0a853fdcc1d28c306d75e29195a5536087f6e2b4</link>
<description/>
<size>8353311760</size>
</item><item>
<title>Corpus of Russian news articles collected from Lenta.Ru</title>
<category>Dataset</category>
<infohash>cfc4ba252fe56176d9db31b0609f0ece6a389b09</infohash>
<guid>https://academictorrents.com/details/cfc4ba252fe56176d9db31b0609f0ece6a389b09</guid>
<link>https://academictorrents.com/details/cfc4ba252fe56176d9db31b0609f0ece6a389b09</link>
<description>This dataset contains 699,746 news articles from the popular Russian news site Lenta.Ru.</description>
<size>1810139847</size>
</item><item>
<title>LUng Nodule Analysis (LUNA16) All Images</title>
<category>Dataset</category>
<infohash>58b053204337ca75f7c2e699082baeb57aa08578</infohash>
<guid>https://academictorrents.com/details/58b053204337ca75f7c2e699082baeb57aa08578</guid>
<link>https://academictorrents.com/details/58b053204337ca75f7c2e699082baeb57aa08578</link>
<description>| ![](https://i.imgur.com/8Oolu7D.png)      | ![](https://i.imgur.com/5WsoKqU.png)   | |&amp;mdash; |-  | Lung cancer is the leading cause of cancer-related death worldwide. Screening high risk individuals for lung cancer with low-dose CT scans is now being implemented in the United States and other countries are expected to follow soon. In CT lung cancer screening, many millions of CT scans will have to be analyzed, which is an enormous burden for radiologists. Therefore there is a lot of interest to develop computer algorithms to optimize screening. A vital first step in the analysis of lung cancer screening CT scans is the detection of pulmonary nodules, which may or may not represent early stage lung cancer. Many Computer-Aided Detection (CAD) systems have already been proposed for this task. The LUNA16 challenge will focus on a large-scale evaluation of automatic nodule detection algorithms on the LIDC/IDRI data set. The LIDC/IDRI data set is publicly available, including the annotations of nodules by four radiologists. The LUNA16 challenge is therefore a completely open challenge. We have tracks for complete systems for nodule detection, and for systems that use a list of locations of possible nodules. We provide this list to also allow teams to participate with an algorithm that only determines the likelihood for a given location in a CT scan to contain a pulmonary nodule. ### Motivation Lung cancer is the leading cause of cancer-related death worldwide. The National Lung Screening Trial (NLST), a randomized control trial in the U.S. including more than 50,000 high-risk subjects, showed that lung cancer screening using annual low-dose computed tomography (CT) reduces lung cancer mortality by 20% in comparison to annual screening with chest radiography [1]. In 2013, the U.S. Preventive Services Task Force (USPSTF) has given low-dose CT screening a grade B recommendation for high-risk individuals [2] and early 2015, the U.S. 
Centers for Medicare and Medicaid Services (CMS) approved CT lung cancer screening for Medicare recipients. As a result of these developments, lung cancer screening programs using low-dose CT are being implemented in the United States and other countries. Computer-aided detection (CAD) of pulmonary nodules could play an important role when screening is implemented on a large scale. Large evaluation studies investigating the performance of different state-of-the-art CAD systems are scarce. Therefore, we organize a novel CAD detection challenge using the large public LIDC-IDRI dataset. The detailed description of the challenge is now available in this article. We believe that this challenge is important for a reliable comparison of CAD algorithms and to encourage rapid development of new algorithms using state-of-the-art computer vision technology. ### Challenge tracks We invite the research community to participate in one or two of the following challenge tracks: 1. Nodule detection (NDET) Using raw CT scans, the goal is to identify locations of possible nodules, and to assign a probability for being a nodule to each location. The pipeline typically consists of two stages: candidate detection and false positive reduction. 2. False positive reduction (FPRED) Given a set of candidate locations, the goal is to assign a probability for being a nodule to each candidate location. Hence, one could see this as a classification task: nodule or not a nodule. Candidate locations will be provided in world coordinates. This set detects 1,162/1,186 nodules. ### Open challenge LUNA16 is a completely open challenge. This means that unlike other challenges, images and reference standard are publicly available. The goal of LUNA16 is to provide an opportunity for participants to test their algorithm on a common database with a standardized evaluation protocol.
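Since the candidate locations are given in world coordinates (millimeters), they must typically be mapped into voxel indices of the CT volume before classification. A minimal sketch, assuming an axis-aligned scan (identity direction matrix); the origin and spacing would normally be read from the scan's image header, and the values below are illustrative only:

```python
# Scan geometry: physical origin (mm) and voxel spacing (mm per voxel),
# given here per axis (x, y, z). Illustrative values, not from a real scan.
origin = (-195.0, -195.0, -378.0)
spacing = (0.76, 0.76, 2.5)

def world_to_voxel(world, origin, spacing):
    # voxel index = (world coordinate - volume origin) / voxel spacing,
    # rounded to the nearest integer index.
    return tuple(round((w - o) / s) for w, o, s in zip(world, origin, spacing))

# Map one candidate given in world coordinates (mm) to voxel indices.
voxel = world_to_voxel((-100.0, -50.0, -300.0), origin, spacing)
```

This subtraction-and-division only holds when the image direction matrix is the identity; scans with other orientations also need the direction cosines applied.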
In the spirit of speeding up scientific progress, the results listed on the website can be used as an indication of how well state-of-the-art CAD algorithms perform. We hope LUNA16 will yield several results that are worthwhile for the CAD research community. We are committed to maintaining this site as a public repository of benchmark results for nodule detection on a common database, in the spirit of cooperative scientific progress. In return, we ask everyone who uses this site to respect the rules below. ### Rules The following rules apply to those who register a team and download the data: The downloaded data sets, or any data derived from them, may not be given or redistributed under any circumstances to persons not belonging to the registered team. All information entered when registering a team, including the name of the contact person, the affiliation (the institute, organization or company the team's contact person works for) and the e-mail address, must be complete and correct; in other words, anonymous registration is not allowed. If you want to submit anonymously, for example because you want to submit your results to a conference that requires anonymous submission, please contact the organizers. The LUNA16 organizers reserve the right to request a PDF file describing the system to accompany a submitted result, and may refuse to evaluate systems whose description does not meet minimal requirements. Results uploaded to this website will be made publicly available on this site (see the Results section), and by submitting results you grant us permission to do so. Teams, of course, maintain full ownership of and rights to their method. Teams must notify the maintainers of this site about any publication that is (partly) based on the data on this site, so that we can maintain a list of publications associated with the LUNA16 study. ### References [1] Aberle D. R., Adams A. M., Berg C. D., Black W. C., Clapp J. D., Fagerstrom R. M., Gareen I.
F., Gatsonis C., Marcus P. M., and Sicks J. D. Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med, 365:395–409, 2011. [2] Moyer V. A., U.S. Preventive Services Task Force. Screening for lung cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med, 160:330–338, 2014. [3] Armato S. G., McLennan G., Bidaut L., McNitt-Gray M. F., Meyer C. R., Reeves A. P., et al. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans. Med Phys, 38:915–931, 2011. ### Organizers Colin Jacobs (Radboud University Medical Center, Nijmegen, The Netherlands) Arnaud Arindra Adiyoso Setio (Radboud University Medical Center, Nijmegen, The Netherlands) Alberto Traverso (Polytechnic University of Turin and Turin Section of INFN, Turin, Italy) Bram van Ginneken (Radboud University Medical Center, Nijmegen, The Netherlands)</description>
<size>65995402313</size>
</item><item>
<title>A collection of sport activity datasets with an emphasis on powermeter data</title>
<category>Dataset</category>
<infohash>bf76b193960a96a683f9c2afde70acab9d3d757d</infohash>
<guid>https://academictorrents.com/details/bf76b193960a96a683f9c2afde70acab9d3d757d</guid>
<link>https://academictorrents.com/details/bf76b193960a96a683f9c2afde70acab9d3d757d</link>
<description/>
<size>919751720</size>
</item><item>
<title>03FYZ 2017-2018 Tecniche di Programmazione (ITA)</title>
<category>Course</category>
<infohash>82a376a0ec3055e12f6318f91bea426fa9826c36</infohash>
<guid>https://academictorrents.com/details/82a376a0ec3055e12f6318f91bea426fa9826c36</guid>
<link>https://academictorrents.com/details/82a376a0ec3055e12f6318f91bea426fa9826c36</link>
<description>Video lectures for the course "Tecniche di Programmazione" (Programming Techniques), held at Politecnico di Torino in the 2017/2018 academic year (in Italian). Course instructors: Fulvio Corno, Andrea Marcelli, Alberto Monge Roffarello. Course information: - official course page: http://bit.ly/tecn-progr - teaching material: https://github.com/TdP-2018/materiale - exercises and labs: https://github.com/TdP-2018 - past exams: https://github.com/TdP-esami These video lectures are also available as a playlist on YouTube.</description>
<size>12900129418</size>
</item><item>
<title>01QZP 2017-2018 Ambient Intelligence</title>
<category>Course</category>
<infohash>2e2c44bc980463027405408a7975877a7e37c57a</infohash>
<guid>https://academictorrents.com/details/2e2c44bc980463027405408a7975877a7e37c57a</guid>
<link>https://academictorrents.com/details/2e2c44bc980463027405408a7975877a7e37c57a</link>
<description>Lectures of Ambient Intelligence at Politecnico di Torino, in 2018. Topics: * Introduction to Ambient Intelligence: definitions and available approaches for smart homes, smart buildings, etc. Overview of application areas (home, building, city, traffic, etc.) and types of applications (monitoring, comfort, anomaly detection, ambient assisted living, control and automation, etc.) * Requirements and design methodology for AmI. Design, analysis and specification of requirements and functionalities related to users interacting with AmI settings. * Practical programming of AmI systems: the Python language, the Raspberry Pi computer, Web protocols and languages (e.g., HTTP and REST), web-based APIs, and collaboration tools (git, GitHub).</description>
<size>4551871695</size>
</item><item>
<title>cataracts-2018-train</title>
<category>Dataset</category>
<infohash>d73aef93c45f583a6c7abf508e750da6b7636bff</infohash>
<guid>https://academictorrents.com/details/d73aef93c45f583a6c7abf508e750da6b7636bff</guid>
<link>https://academictorrents.com/details/d73aef93c45f583a6c7abf508e750da6b7636bff</link>
<description>Surgical tool detection is attracting increasing attention from the medical image processing community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and real-time decision support. In 2017, we organized a challenge on surgical tool detection in cataract surgery videos. With 14 participating teams, that first edition of CATARACTS was a success. Therefore, we decided to repeat this experience in 2018 with new data and new technical challenges. In particular, the 2018 edition provides two synchronized video streams per surgery: one showing the patient's eye (as in the 2017 edition), the other showing the surgical tray. By knowing which tools exit or enter the surgical tray, we know which tools are likely being used by the surgeon and which tools surely are not. This second edition of CATARACTS is organized as a sub-challenge of the MICCAI 2018 EndoVis challenge. It should be noted that CATARACTS does not rely on endoscopic videos, but rather on microscopic videos. However, these two modalities share many similarities that justify a joint event.</description>
<size>382213803718</size>
</item><item>
<title>The World Riddles (2018 pop sci film, videobook) by Anand Madhu</title>
<category>Course</category>
<infohash>673e960ed7b021717bffe146cbbb70e5d02a9a35</infohash>
<guid>https://academictorrents.com/details/673e960ed7b021717bffe146cbbb70e5d02a9a35</guid>
<link>https://academictorrents.com/details/673e960ed7b021717bffe146cbbb70e5d02a9a35</link>
<description>Category: Science, Philosophy, Philosophy of Science An interdisciplinary pop sci film/video-thesis that explores the nature of man, starting with cultural references and then moving to psychology and neuroscience. After this, comparative mythology, blood group dynamics, etc. are studied in a similar way. It concludes with sociopolitical recommendations.</description>
<size>8649498930</size>
</item><item>
<title>Applied Proteogenomics OrganizationaL Learning and Outcomes (APOLLO) Image Data</title>
<category>Dataset</category>
<infohash>d01d7568512efe5a9ad0525af853cab9ff921e51</infohash>
<guid>https://academictorrents.com/details/d01d7568512efe5a9ad0525af853cab9ff921e51</guid>
<link>https://academictorrents.com/details/d01d7568512efe5a9ad0525af853cab9ff921e51</link>
<description>This data collection consists of images and associated data acquired from the APOLLO Network. The Applied Proteogenomics OrganizationaL Learning and Outcomes (APOLLO) network is a collaboration between NCI, the Department of Defense (DoD), and the Department of Veterans Affairs (VA) to incorporate proteogenomics into patient care as a way of looking beyond the genome, to the activity and expression of the proteins that the genome encodes. The emerging field of proteogenomics aims to better predict how patients will respond to therapy by screening their tumors for both genetic abnormalities and protein information, an approach that has been made possible in recent years due to advances in proteomic technology.

| Detailed Description | |
|---|---|
| Image Size (GB) | 2.6 |
| Modalities | PET, CT, MRI, MRA + others |
| Number of Images | 6203 |
| Number of Patients | 7 |
| Number of Series | 43 |
| Number of Studies | 36 |

# Citation request Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, Tarbox L, Prior F. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository, Journal of Digital Imaging, Volume 26, Number 6, December 2013, pp 1045-1057. ![](https://i.imgur.com/TAshQmM.png)</description>
<size>2640815762</size>
</item><item>
<title>MRI Lesion Segmentation in Multiple Sclerosis Database</title>
<category>Dataset</category>
<infohash>e08155e5022d688fea00319bd2ead4f0f703f5bb</infohash>
<guid>https://academictorrents.com/details/e08155e5022d688fea00319bd2ead4f0f703f5bb</guid>
<link>https://academictorrents.com/details/e08155e5022d688fea00319bd2ead4f0f703f5bb</link>
<description>MRI MS DB Description: The IMT-Segmentation folder contains 38 subfolders, one for each of the 38 patients. Each patient folder contains: 1) MRI TIFF images from the first and second examinations (0 months, 6-12 months) 2) Lesion segmentations (*.plq files). The delineations can be loaded into MATLAB with the '-mat' option, e.g. load('file.plq', '-mat'); the points can then be drawn on the image: load('IM_00031_1.plq', '-mat');</description>
<size>193085367</size>
</item><item>
<title>The PDS Universal Planetary Coordinates (UPC) Database,  Mars DB (Single File)</title>
<category>Dataset</category>
<infohash>28e4d28f5c7873118147b64625e85ce34ea50184</infohash>
<guid>https://academictorrents.com/details/28e4d28f5c7873118147b64625e85ce34ea50184</guid>
<link>https://academictorrents.com/details/28e4d28f5c7873118147b64625e85ce34ea50184</link>
<description>What is the Universal Planetary Coordinates (UPC)? The Universal Planetary Coordinates (or UPC) is a database of many of the level 1 imaging data products archived in the PDS Imaging Node. The UPC includes the camera statistics, URLs for thumbnail and browse images, and the GIS footprint for each image. These data products and metadata are calculated using ISIS3. For this reason, only data products which have an ISIS3 camera model can be included in the UPC.</description>
<size>4732773100</size>
</item><item>
<title>Statistical Machine Learning CMU Spring 2016</title>
<category>Course</category>
<infohash>07f1555918ed051809f0075fedc0cd469a194c93</infohash>
<guid>https://academictorrents.com/details/07f1555918ed051809f0075fedc0cd469a194c93</guid>
<link>https://academictorrents.com/details/07f1555918ed051809f0075fedc0cd469a194c93</link>
<description>Statistical Machine Learning is a second graduate-level course in advanced machine learning, assuming students have taken Machine Learning (10-715) and Intermediate Statistics (36-705). The course covers methodology and theoretical foundations. Topics: Function Spaces, Concentration of Measure, Linear Regression, Nonparametric Regression, Linear Classification, Nonparametric Classification, Minimax Theory, Density Estimation, Nonparametric Bayes, Clustering, Graphical Models, Dimension Reduction, Random Matrix Theory.</description>
<size>28189820070</size>
</item><item>
<title>PASCAL Visual Object Classes Challenge 2007 (VOC2007) VOCtest_06-Nov-2007.tar</title>
<category>Dataset</category>
<infohash>7b387c8154f9cc3f106e5bb4932fd7d8c7728129</infohash>
<guid>https://academictorrents.com/details/7b387c8154f9cc3f106e5bb4932fd7d8c7728129</guid>
<link>https://academictorrents.com/details/7b387c8154f9cc3f106e5bb4932fd7d8c7728129</link>
<description>## Introduction The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are: Person: person; Animal: bird, cat, cow, dog, horse, sheep; Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train; Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor. There will be two main competitions, and two smaller-scale "taster" competitions.</description>
<size>451020800</size>
</item><item>
<title>The PASCAL Visual Object Classes Challenge 2012 (VOC2012) VOCtestnoimgs_06-Nov-2007.tar</title>
<category>Dataset</category>
<infohash>20f2372d6cbf09ab71817b7b070e20d97473b373</infohash>
<guid>https://academictorrents.com/details/20f2372d6cbf09ab71817b7b070e20d97473b373</guid>
<link>https://academictorrents.com/details/20f2372d6cbf09ab71817b7b070e20d97473b373</link>
<description/>
<size>12492800</size>
</item><item>
<title>CrackStation's Password Cracking Dictionary</title>
<category>Dataset</category>
<infohash>fd62cc1d79f595cbe1de6356fb13c2165994e469</infohash>
<guid>https://academictorrents.com/details/fd62cc1d79f595cbe1de6356fb13c2165994e469</guid>
<link>https://academictorrents.com/details/fd62cc1d79f595cbe1de6356fb13c2165994e469</link>
<description>The list contains every wordlist, dictionary, and password database leak that I could find on the internet (and I spent a LOT of time looking). It also contains every word in the Wikipedia databases (pages-articles, retrieved 2010, all languages) as well as lots of books from Project Gutenberg. It also includes the passwords from some low-profile database breaches that were being sold in the underground years ago. The format of the list is a standard text file sorted in non-case-sensitive alphabetical order. Lines are separated with a newline "\n" character. You can test the list without downloading it by giving SHA256 hashes to the free hash cracker or to @PlzCrack on Twitter. Here's a tool for computing hashes easily. Here are the results of cracking LinkedIn's and eHarmony's password hash leaks with the list. The list is responsible for cracking about 30% of all hashes given to CrackStation's free hash cracker, but that figure should be taken with a grain of salt because some people try hashes of really weak passwords just to test the service, and others try to crack their hashes with other online hash crackers before finding CrackStation. Using the list, we were able to crack 49.98% of one customer's set of 373,000 human password hashes to motivate their move to a better salting scheme.</description>
<size>4500756826</size>
</item><item>
<title>MIT OCW 6.00 -  Introduction to Computer Science and Programming. Fall 2008</title>
<category>Course</category>
<infohash>f7c9a9db5d0d9a1e0f2f383ec629fefffa475ae5</infohash>
<guid>https://academictorrents.com/details/f7c9a9db5d0d9a1e0f2f383ec629fefffa475ae5</guid>
<link>https://academictorrents.com/details/f7c9a9db5d0d9a1e0f2f383ec629fefffa475ae5</link>
<description>This subject is aimed at students with little or no programming experience. It aims to provide students with an understanding of the role computation can play in solving problems. It also aims to help students, regardless of their major, to feel justifiably confident of their ability to write small programs that allow them to accomplish useful goals. The class will use the Python programming language.</description>
<size>5336847948</size>
</item><item>
<title>Internet History, Technology, and Security by Charles Severance</title>
<category>Course</category>
<infohash>d666bf4b83066bcf7401e9c155fcd1b1c01cfb11</infohash>
<guid>https://academictorrents.com/details/d666bf4b83066bcf7401e9c155fcd1b1c01cfb11</guid>
<link>https://academictorrents.com/details/d666bf4b83066bcf7401e9c155fcd1b1c01cfb11</link>
<description>(Including videos, presentation files and subtitles for the videos [ENG only]) ## About the Course The impact of technology and networks on our lives, culture, and society continues to increase. The very fact that you can take this course from anywhere in the world requires a technological infrastructure that was designed, engineered, and built over the past sixty years. To function in an information-centric world, we need to understand the workings of network technology. This course will open up the Internet and show you how it was created, who created it and how it works. Along the way we will meet many of the innovators who developed the Internet and Web technologies that we use today. ## What You Will Learn After this course you will not take the Internet and Web for granted. You will be better informed about important technological issues currently facing society. You will realize that the Internet and Web are spaces for innovation and you will get a better understanding of how you might fit into that innovation. If you get excited about the material in this course, it is a great lead-in to taking a course in Web design, Web development, programming, or even network administration. At a minimum, you will be a much wiser network citizen. ## Course Syllabus * Week 1: Introduction to the Course and The Dawn of Electronic Computing (1940-1960) * Week 2: The First Internet (1960-1990) * Week 3: The World Wide Web (1990-1995) * Week 4: Commercialization and Growth (1995-2000) * Week 5: Internets and Packets * Week 6: Transports and Security * Week 7: Networked Applications * Week 8: Security - Protecting Information * Week 9: Security - Establishing Identity * Final Exam ## Recommended Background This course has no prerequisites and there will be no programming. Literally anyone can and everyone should take this course.</description>
<size>2349673820</size>
</item><item>
<title>MICCAI 2013 Challenge on Multimodal Brain Tumor Segmentation (BraTS2013)	</title>
<category>Dataset</category>
<infohash>39c5a52bda7b5b701cecfc454a79d385868d4f3d</infohash>
<guid>https://academictorrents.com/details/39c5a52bda7b5b701cecfc454a79d385868d4f3d</guid>
<link>https://academictorrents.com/details/39c5a52bda7b5b701cecfc454a79d385868d4f3d</link>
<description>A publicly available set of training data can be downloaded for algorithmic tweaking and tuning from the Virtual Skeleton Database. The training data consists of multi-contrast MR scans of 30 glioma patients (both low-grade and high-grade, and both with and without resection) along with expert annotations for "active tumor" and "edema". For each patient, T1, T2, FLAIR, and post-Gadolinium T1 MR images are available. All volumes were linearly co-registered to the T1 contrast image, skull stripped, and interpolated to 1mm isotropic resolution. No attempt was made to put the individual patients in a common reference space. The MR scans, as well as the corresponding reference segmentations, are distributed in the ITK- and VTK-compatible MetaIO file format. Patients with high- and low-grade gliomas have file names "BRATS_HG" and "BRATS_LG", respectively. All images are stored as signed 16-bit integers, but only positive values are used. The manual segmentations (file names ending in "_truth.mha") have only five intensity levels: 1 for non-brain, non-tumor, necrosis, cyst, or hemorrhage; 2 for surrounding edema; 3 for non-enhancing tumor; 4 for enhancing tumor core; and 0 for everything else. Detailed technical documentation on the MetaIO file format used is available here. The training data also contains simulated images for 25 high-grade and 25 low-grade glioma subjects. These simulated images closely follow the conventions used for the real data, except that their file names start with "SimBRATS"; they are all in BrainWeb space; and their MR scans and ground truth segmentations are stored using unsigned 16-bit and unsigned 8-bit integers, respectively. Details on the simulation method are available here. ### Testing data A set of independent testing data will be provided on the day of the challenge itself. This testing data will be similar to the training data, except that the reference segmentation will not be made publicly available.
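The label convention above translates directly into code. A minimal numpy sketch of the label bookkeeping, run here on a tiny synthetic volume; reading the actual MetaIO volumes would additionally require an ITK-compatible reader, which is not shown:

```python
import numpy as np

# Label values used in the "_truth.mha" segmentations, as listed above.
BRATS_LABELS = {
    0: "everything else",
    1: "non-brain, non-tumor, necrosis, cyst, hemorrhage",
    2: "surrounding edema",
    3: "non-enhancing tumor",
    4: "enhancing tumor core",
}

def label_volumes(segmentation, voxel_volume_mm3=1.0):
    """Return the volume (mm^3) occupied by each label in a segmentation."""
    values, counts = np.unique(np.asarray(segmentation), return_counts=True)
    return {BRATS_LABELS[int(v)]: int(c) * voxel_volume_mm3
            for v, c in zip(values, counts)}

# Tiny synthetic example at the dataset's 1mm isotropic resolution:
toy = np.zeros((4, 4, 4), dtype=np.int16)
toy[1:3, 1:3, 1:3] = 2   # a small block of edema
toy[1, 1, 1] = 4         # one enhancing-core voxel inside it
volumes = label_volumes(toy)
```

Since the data is 1mm isotropic, voxel counts and mm^3 volumes coincide here.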
![](https://i.imgur.com/aSB7Y0r.png)</description>
<size>19706785133</size>
</item><item>
<title>Caudate Segmentation Evaluation 2007 (CAUSE07)</title>
<category>Dataset</category>
<infohash>d6c066ef308cc704c8898d5f87cf55e986475fb5</infohash>
<guid>https://academictorrents.com/details/d6c066ef308cc704c8898d5f87cf55e986475fb5</guid>
<link>https://academictorrents.com/details/d6c066ef308cc704c8898d5f87cf55e986475fb5</link>
<description>CAUSE07 is a competition that was held as part of the workshop 3D Segmentation in the Clinic: A Grand Challenge, on October 26, 2007 in conjunction with MICCAI 2007. The goal of this competition was to compare different algorithms to segment the caudate nucleus from brain MRI scans. Through this website, the competition continues. You can browse the results of various systems, and read papers and descriptions about the methods that have been applied to the CAUSE07 data set. If you want to join the competition, you can register a team, download training and test data, and submit the results of your own algorithms, provided you adhere to and agree with the rules. More information is available in the answers to frequently asked questions. ![](https://i.imgur.com/Yu4YFY1.gif) ## Citation Request "3D Segmentation in the Clinic: A Grand Challenge", B. van Ginneken, T. Heimann, and M. Styner. In: T. Heimann, M. Styner, B. van Ginneken (Eds.): 3D Segmentation in the Clinic: A Grand Challenge, pp. 7-15, 2007.</description>
<size>1887197764</size>
</item><item>
<title>02CIX 2017-2018 Sistemi Informativi Aziendali (ITA)</title>
<category>Course</category>
<infohash>d1b42ce7f9b00f63fd90bd8468ffb68c7ab1423c</infohash>
<guid>https://academictorrents.com/details/d1b42ce7f9b00f63fd90bd8468ffb68c7ab1423c</guid>
<link>https://academictorrents.com/details/d1b42ce7f9b00f63fd90bd8468ffb68c7ab1423c</link>
<description>Lectures of "Sistemi Informativi Aziendali" (Enterprise Information Systems) at Politecnico di Torino, Italy, academic year 2017/2018 (in Italian). Course topics: introduction to the course; definition of an information system; conceptual modeling; process modeling; requirements engineering; use cases; Web Information Systems; user interface design; management information systems. Teaching material is available at: http://bit.ly/sistinfo</description>
<size>7349501486</size>
</item><item>
<title>Malignant lymphoma classification</title>
<category>Dataset</category>
<infohash>3cde17e7e4d9886513630c1005ba20b8d37c333a</infohash>
<guid>https://academictorrents.com/details/3cde17e7e4d9886513630c1005ba20b8d37c333a</guid>
<link>https://academictorrents.com/details/3cde17e7e4d9886513630c1005ba20b8d37c333a</link>
<description>Malignant lymphoma is a cancer affecting lymph nodes. Three types of malignant lymphoma are represented in the set: CLL (chronic lymphocytic leukemia), FL (follicular lymphoma), and MCL (mantle cell lymphoma). The ability to distinguish classes of lymphoma from biopsies sectioned and stained with Hematoxylin/Eosin (H+E) would allow for more consistent and less demanding diagnosis of this disease. Only the most expert pathologists specializing in these types of lymphomas are able to consistently and accurately classify these three lymphoma types from H+E-stained biopsies. The standard practice is to use class-specific probes in order to distinguish these classes reliably. The dataset presented is a collection of samples prepared by different pathologists at different sites. There is a large degree of staining variation that one would normally expect from such samples. A randomly selected image from each class: ![](https://i.imgur.com/qoo1AAM.png)</description>
<size>1441583313</size>
</item><item>
<title>Breast Cancer Cell Segmentation</title>
<category>Dataset</category>
<infohash>b79869ca12787166de88311ca1f28e3ebec12dec</infohash>
<guid>https://academictorrents.com/details/b79869ca12787166de88311ca1f28e3ebec12dec</guid>
<link>https://academictorrents.com/details/b79869ca12787166de88311ca1f28e3ebec12dec</link>
<description>There are about 58 H&amp;E-stained histopathology images used in breast cancer cell detection, with associated ground-truth data available. Routine histology uses the stain combination of hematoxylin and eosin, commonly referred to as H&amp;E. These images are stained because most cells are essentially transparent, with little or no intrinsic pigment. Certain special stains, which bind selectively to particular components, are used to identify biological structures such as cells. In these images, the challenging problem is cell segmentation for subsequent classification into benign and malignant cells. Ground truth has been obtained for one image containing benign cells.

| Image: | Ground Truth: |
|---|---|
| ![](https://i.imgur.com/haa5X8O.png) | ![](https://i.imgur.com/gqBikTa.png) |

All images: ![](https://i.imgur.com/QM22bG2.png)</description>
<size>159955958</size>
</item><item>
<title>Introduction to Computer Science [CS50x] [Harvard] [2018]</title>
<category>Course</category>
<infohash>52da574b6412862e199abeaea63e51bf8cea2140</infohash>
<guid>https://academictorrents.com/details/52da574b6412862e199abeaea63e51bf8cea2140</guid>
<link>https://academictorrents.com/details/52da574b6412862e199abeaea63e51bf8cea2140</link>
<description>"Demanding, but definitely doable. Social, but educational. A focused topic, but broadly applicable skills. CS50 is the quintessential Harvard (and Yale!) course." Hello, world! This is CS50 (aka CS50x through edX), Harvard University's introduction to the intellectual enterprises of computer science and the art of programming. This course teaches students how to think algorithmically and solve problems efficiently. Topics include abstraction, algorithms, data structures, encapsulation, resource management, security, software engineering, and web development. Languages include C, Python, SQL, and JavaScript plus CSS and HTML. Problem sets are inspired by real-world domains of biology, cryptography, finance, forensics, and gaming. The course is designed for majors and non-majors alike, with or without prior programming experience.</description>
<size>9605502232</size>
</item><item>
<title>GANGogh training data set</title>
<category>Dataset</category>
<infohash>1d154cde2fab9ec8039becd03d9bb877614d351b</infohash>
<guid>https://academictorrents.com/details/1d154cde2fab9ec8039becd03d9bb877614d351b</guid>
<link>https://academictorrents.com/details/1d154cde2fab9ec8039becd03d9bb877614d351b</link>
<description>This is a training data set that can be used for the GANGogh machine learning model. Once it is downloaded, modify the styles variable in tflib/wikiartGenre.py accordingly.</description>
<size>37153345191</size>
</item><item>
<title>Electron Microscopy (CA1 hippocampus) Dataset</title>
<category>Dataset</category>
<infohash>3ada3ae6ec71097e63d897cf878051bba3eaba25</infohash>
<guid>https://academictorrents.com/details/3ada3ae6ec71097e63d897cf878051bba3eaba25</guid>
<link>https://academictorrents.com/details/3ada3ae6ec71097e63d897cf878051bba3eaba25</link>
<description>The dataset available for download on this webpage represents a 5x5x5µm section taken from the CA1 hippocampus region of the brain, corresponding to a 1065x2048x1536 volume. The resolution of each voxel is approximately 5x5x5nm. The data is provided as multipage TIF files that can be loaded in Fiji. ![](https://i.imgur.com/rTCKgHn.png) ![](https://i.imgur.com/DkDkaMH.gif) We annotated mitochondria in two sub-volumes. Each sub-volume consists of the first 165 slices of the 1065x2048x1536 image stack. The volume used for training our algorithm in the publications mentioned at the bottom of this page is the top part, while the bottom part was used for testing. Although our line of research was primarily motivated by the need to accurately segment mitochondria and synapses, other structures, such as vesicles or cell boundaries, are of interest to neuroscientists. This dataset was acquired by Graham Knott and Marco Cantoni at EPFL. It is made publicly available in the hope of encouraging similar sharing of useful data amongst researchers and also accelerating neuroscientific research. For further information, please visit http://cvlab.epfl.ch/research/medical/em/mitochondria.

    total 3.7G
    124M testing_groundtruth.tif
    124M testing.tif
    124M training_groundtruth.tif
    124M training.tif
    3.2G volumedata.tif

### References A. Lucchi, Y. Li, and P. Fua, Learning for Structured Prediction Using Approximate Subgradient Descent with Working Sets, Conference on Computer Vision and Pattern Recognition, 2013. A. Lucchi, K. Smith, R. Achanta, G. Knott, P. Fua, Supervoxel-Based Segmentation of Mitochondria in EM Image Stacks with Learned Shape Features, IEEE Transactions on Medical Imaging, Vol. 30, Nr. 11, October 2011.</description>
<size>3873351785</size>
</item><item>
<title>Animals with Attributes 2 (AwA2) dataset</title>
<category>Dataset</category>
<infohash>1490aec815141cdb50a32b81ef78b1eaf6b38b03</infohash>
<guid>https://academictorrents.com/details/1490aec815141cdb50a32b81ef78b1eaf6b38b03</guid>
<link>https://academictorrents.com/details/1490aec815141cdb50a32b81ef78b1eaf6b38b03</link>
<description>This dataset provides a platform to benchmark transfer-learning algorithms, in particular attribute-based classification and zero-shot learning [1]. It can act as a drop-in replacement for the original Animals with Attributes (AwA) dataset [2,3], as it has the same class structure and almost the same characteristics. It consists of 37322 images of 50 animal classes with pre-extracted feature representations for each image. The classes are aligned with Osherson's classical class/attribute matrix [3,4], thereby providing 85 numeric attribute values for each class. Using the shared attributes, it is possible to transfer information between different classes. The image data was collected from public sources, such as Flickr, in 2016. In the process we made sure to only include images that are licensed for free use and redistribution; please see the archive for the individual license files. ![](https://cvml.ist.ac.at/AwA2/awa2_banner.jpg) ### Publications Please cite the following paper when using the dataset: [1] Y. Xian, C. H. Lampert, B. Schiele, Z. Akata. "Zero-Shot Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly" arXiv:1707.00600 [cs.CV] Attribute-based classification and the original Animals with Attributes (AwA) data is described in: [2] C. H. Lampert, H. Nickisch, and S. Harmeling. "Learning To Detect Unseen Object Classes by Between-Class Attribute Transfer". In CVPR, 2009 (pdf) [3] C. H. Lampert, H. Nickisch, and S. Harmeling. "Attribute-Based Classification for Zero-Shot Visual Object Categorization". IEEE T-PAMI, 2013 (pdf) The class/attribute matrix was originally created by: [4] D. N. Osherson, J. Stern, O. Wilkie, M. Stob, and E. E. Smith. "Default probability". Cognitive Science, 15(2), 1991. [5] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. "Learning systems of concepts with an infinite relational model". In AAAI, 2006.</description>
<size>13923921135</size>
</item><item>
<title>UrbanMapper 3D (Digital Surface Model and Digital Terrain Model) Dataset</title>
<category>Dataset</category>
<infohash>4ccd3743861d827ac80f0d2b234d7fcfdad2a31d</infohash>
<guid>https://academictorrents.com/details/4ccd3743861d827ac80f0d2b234d7fcfdad2a31d</guid>
<link>https://academictorrents.com/details/4ccd3743861d827ac80f0d2b234d7fcfdad2a31d</link>
<description>Competitors will receive an orthorectified color image, Digital Surface Model (DSM), and Digital Terrain Model (DTM) for each geographic area of interest (AOI). The DSM indicates the height of the earth's surface, including objects such as buildings and trees. The DTM indicates only the height of the ground. Both should be expected to include some errors, and the errors may be expected to be similar in the provisional and sequestered data sets. The difference between the DSM and DTM indicates the height of objects above ground. All input files provided are raster GeoTIFF images. Ground truth building labels will also be provided for a subset of the data to be used for training. ![](https://i.imgur.com/fnAqq30.png) ![](https://i.imgur.com/vMOXxGr.png)</description>
<size>6618441904</size>
</item><item>
<title>UC Merced Land Use Dataset</title>
<category>Dataset</category>
<infohash>e9ac5edf285a43309e57e1289e8816a4e78a937c</infohash>
<guid>https://academictorrents.com/details/e9ac5edf285a43309e57e1289e8816a4e78a937c</guid>
<link>https://academictorrents.com/details/e9ac5edf285a43309e57e1289e8816a4e78a937c</link>
<description>This is a 21-class land use image dataset meant for research purposes. There are 100 images for each of the following classes: agricultural, airplane, baseballdiamond, beach, buildings, chaparral, denseresidential, forest, freeway, golfcourse, harbor, intersection, mediumresidential, mobilehomepark, overpass, parkinglot, river, runway, sparseresidential, storagetanks, tenniscourt. Each image measures 256x256 pixels. ![](https://i.imgur.com/dT8q6Qi.png) The images were manually extracted from large images from the USGS National Map Urban Area Imagery collection for various urban areas around the country. The pixel resolution of this public domain imagery is 1 foot. Please cite the following paper when publishing results that use this dataset: Yi Yang and Shawn Newsam, "Bag-Of-Visual-Words and Spatial Extensions for Land-Use Classification," ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM GIS), 2010. Shawn D. Newsam, Assistant Professor and Founding Faculty, Electrical Engineering &amp; Computer Science, University of California, Merced. Email: snewsam@ucmerced.edu Web: http://faculty.ucmerced.edu/snewsam This material is based upon work supported by the National Science Foundation under Grant No. 0917069.</description>
<size>332468434</size>
</item><item>
<title>Functional Map of the World - full - trainval v1.0.0</title>
<category>Dataset</category>
<infohash>22955ce4dbc3c73c254ba6196363667c0352d438</infohash>
<guid>https://academictorrents.com/details/22955ce4dbc3c73c254ba6196363667c0352d438</guid>
<link>https://academictorrents.com/details/22955ce4dbc3c73c254ba6196363667c0352d438</link>
<description>Input Files #### Satellite images Satellite images are available in a variety of formats: &lt;image_id&gt;_&lt;T&gt;_ms.tif is an 8-band multispectral TIFF image, where &lt;image_id&gt; is the unique identifier of the scene and &lt;T&gt; is an integer representing time. Different T values for the same image_id mean snapshots of the same scene made at different points in time. (T values are not related to absolute time, and their order may not correspond to temporal order.) &lt;image_id&gt;_&lt;T&gt;_rgb.tif is a 3-band pan-sharpened version of the above, in TIFF format. &lt;image_id&gt;_&lt;T&gt;_msrgb.jpg corresponds to format #1 above, converted to a 3-band, JPEG-compressed RGB image. &lt;image_id&gt;_&lt;T&gt;_rgb.jpg corresponds to format #2 above, converted to a 3-band, JPEG-compressed RGB image. You may choose any (or several) of the above formats to work with; the scene content is the same, but the number of spectral bands, the image resolution, and the level of image compression differ. #### Image metadata and ground truth bounding boxes Metadata on each of the image files is available in JSON files with the same file name, but with the .tif or .jpg extension replaced by .json. The most important pieces of metadata are the following: *gsd* : ground sample distance, the physical size of one image pixel, in meters. *utm*, *country code* : the approximate geolocation of the object. *timestamp* : the time when the image was taken, in UTC. *bounding_boxes* : defines the category label, ID, and location (in image space) of rectangles that you must use as annotated training data and as targets for your predictions. The *box* field within a bounding_box object contains 4 integers which define the x and y coordinates of the top-left corner of the rectangle, and the width and height of the rectangle, in this order. #### Differences between training and testing data The file structure and naming conventions of training and testing data are different.
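The bounding-box layout described above can be illustrated with a short sketch. The record below is a made-up example following the fields listed here (the values and category name are invented for illustration); a real record lives in the .json file named like its image.

```python
import json

# Hypothetical metadata record mirroring the fields described above.
record = json.loads("""
{
  "gsd": 0.5,
  "timestamp": "2016-01-01T12:00:00Z",
  "bounding_boxes": [
    {"ID": 0, "category": "example_category", "box": [10, 20, 300, 400]}
  ]
}
""")

for bb in record["bounding_boxes"]:
    # box holds the top-left corner, then width and height, in this order
    x, y, w, h = bb["box"]
    print(bb["category"], x, y, w, h)
```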
Also, the testing data has been altered in several ways to remove ground truth information and increase the difficulty of the challenge. Training images contain only one bounding box; testing images may contain more than one. The &lt;image_id&gt; of a training image contains the category label assigned to the bounding box present in the image. The &lt;image_id&gt; of a testing image is a random numeric value. The training dataset is organized into folders corresponding to category labels; the testing image file structure is one level shallower. Category labels have been removed from the testing metadata. In the testing dataset, a small amount of uniform noise has been added to several metadata parameters to reduce their effective precision. The minutes and seconds fields of timestamps are set to random values in the range [0 - 59]. Additional bounding boxes have been added to several testing images. These bounding boxes include content that does not fall into any category in the challenge; if the bounding boxes had been generated by some object proposal algorithm, these additional boxes would represent false detections. Images and bounding boxes within images have been randomized and assigned numeric image_ids. A note on training and validation data: the training dataset contains data in two folders, train and val. The content of these two folders is similar; they were created by randomly splitting the whole training dataset into two subsets. You can use both subsets as training data.</description>
<size>2694694895961</size>
</item><item>
<title>Functional Map of the World - full - test v1.0.0</title>
<category>Dataset</category>
<infohash>8da568ecd1f879c0212185c7190dc8bed0753d94</infohash>
<guid>https://academictorrents.com/details/8da568ecd1f879c0212185c7190dc8bed0753d94</guid>
<link>https://academictorrents.com/details/8da568ecd1f879c0212185c7190dc8bed0753d94</link>
<description>Input Files #### Satellite images Satellite images are available in a variety of formats: &lt;image_id&gt;_&lt;T&gt;_ms.tif is an 8-band multispectral TIFF image, where &lt;image_id&gt; is the unique identifier of the scene and &lt;T&gt; is an integer representing time. Different T values for the same image_id mean snapshots of the same scene made at different points in time. (T values are not related to absolute time, and their order may not correspond to temporal order.) &lt;image_id&gt;_&lt;T&gt;_rgb.tif is a 3-band pan-sharpened version of the above, in TIFF format. &lt;image_id&gt;_&lt;T&gt;_msrgb.jpg corresponds to format #1 above, converted to a 3-band, JPEG-compressed RGB image. &lt;image_id&gt;_&lt;T&gt;_rgb.jpg corresponds to format #2 above, converted to a 3-band, JPEG-compressed RGB image. You may choose any (or several) of the above formats to work with; the scene content is the same, but the number of spectral bands, the image resolution, and the level of image compression differ. #### Image metadata and ground truth bounding boxes Metadata on each of the image files is available in JSON files with the same file name, but with the .tif or .jpg extension replaced by .json. The most important pieces of metadata are the following: *gsd* : ground sample distance, the physical size of one image pixel, in meters. *utm*, *country code* : the approximate geolocation of the object. *timestamp* : the time when the image was taken, in UTC. *bounding_boxes* : defines the category label, ID, and location (in image space) of rectangles that you must use as annotated training data and as targets for your predictions. The *box* field within a bounding_box object contains 4 integers which define the x and y coordinates of the top-left corner of the rectangle, and the width and height of the rectangle, in this order. #### Differences between training and testing data The file structure and naming conventions of training and testing data are different.
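The bounding-box layout described above can be illustrated with a short sketch. The record below is a made-up example following the fields listed here (the values and category name are invented for illustration); a real record lives in the .json file named like its image.

```python
import json

# Hypothetical metadata record mirroring the fields described above.
record = json.loads("""
{
  "gsd": 0.5,
  "timestamp": "2016-01-01T12:00:00Z",
  "bounding_boxes": [
    {"ID": 0, "category": "example_category", "box": [10, 20, 300, 400]}
  ]
}
""")

for bb in record["bounding_boxes"]:
    # box holds the top-left corner, then width and height, in this order
    x, y, w, h = bb["box"]
    print(bb["category"], x, y, w, h)
```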
Also, the testing data has been altered in several ways to remove ground truth information and increase the difficulty of the challenge. Training images contain only one bounding box; testing images may contain more than one. The &lt;image_id&gt; of a training image contains the category label assigned to the bounding box present in the image. The &lt;image_id&gt; of a testing image is a random numeric value. The training dataset is organized into folders corresponding to category labels; the testing image file structure is one level shallower. Category labels have been removed from the testing metadata. In the testing dataset, a small amount of uniform noise has been added to several metadata parameters to reduce their effective precision. The minutes and seconds fields of timestamps are set to random values in the range [0 - 59]. Additional bounding boxes have been added to several testing images. These bounding boxes include content that does not fall into any category in the challenge; if the bounding boxes had been generated by some object proposal algorithm, these additional boxes would represent false detections. Images and bounding boxes within images have been randomized and assigned numeric image_ids. A note on training and validation data: the training dataset contains data in two folders, train and val. The content of these two folders is similar; they were created by randomly splitting the whole training dataset into two subsets. You can use both subsets as training data.</description>
<size>352390100532</size>
</item><item>
<title>NIH Chest X-ray Dataset of 14 Common Thorax Disease Categories</title>
<category>Dataset</category>
<infohash>557481faacd824c83fbf57dcf7b6da9383b3235a</infohash>
<guid>https://academictorrents.com/details/557481faacd824c83fbf57dcf7b6da9383b3235a</guid>
<link>https://academictorrents.com/details/557481faacd824c83fbf57dcf7b6da9383b3235a</link>
<description>![](https://i.imgur.com/1InHgLs.png) (1, Atelectasis; 2, Cardiomegaly; 3, Effusion; 4, Infiltration; 5, Mass; 6, Nodule; 7, Pneumonia; 8, Pneumothorax; 9, Consolidation; 10, Edema; 11, Emphysema; 12, Fibrosis; 13, Pleural_Thickening; 14, Hernia) ### Background &amp; Motivation: The chest X-ray is one of the most frequent and cost-effective medical imaging examinations. However, clinical diagnosis from chest X-rays can be challenging, and is sometimes believed to be harder than diagnosis via chest CT imaging. Although some promising work has been reported in the past, especially in recent deep learning work on tuberculosis (TB) classification, achieving clinically relevant computer-aided detection and diagnosis (CAD) in real-world medical sites on all data settings of chest X-rays is still very difficult, if not impossible, when only several thousand images are employed for study. This is evident from [2], where the performance of deep neural networks for thorax disease recognition is severely limited by the availability of only 4,143 frontal-view images [3] (OpenI was previously the largest publicly available chest X-ray dataset). In this database, we provide an enhanced version (with 6 more disease categories and more images) of the dataset used in the recent work [1], containing approximately 27 times the number of frontal chest X-ray images in [3]. Our dataset is extracted from the clinical PACS database at the National Institutes of Health Clinical Center and consists of ~60% of all frontal chest X-rays in the hospital. Therefore, we expect this dataset to be significantly more representative of real patient population distributions and realistic clinical diagnosis challenges than any previous chest X-ray dataset. Of course, the size of our dataset, in terms of the total number of images and thorax disease frequencies, would better facilitate deep neural network training [2].
Refer to [1] for details of how the dataset was extracted and how image labels were mined through natural language processing (NLP). ### Details: The ChestX-ray dataset comprises 112,120 frontal-view X-ray images of 30,805 unique patients with fourteen text-mined disease image labels (where each image can have multiple labels), mined from the associated radiological reports using natural language processing. The fourteen common thoracic pathologies include Atelectasis, Consolidation, Infiltration, Pneumothorax, Edema, Emphysema, Fibrosis, Effusion, Pneumonia, Pleural_Thickening, Cardiomegaly, Nodule, Mass, and Hernia; this is an extension of the 8 common disease patterns listed in our CVPR 2017 paper. Note that the original radiology reports (associated with these chest X-ray studies) are not meant to be publicly shared, for many reasons. The text-mined disease labels are expected to have accuracy &gt;90%. Please find more details and benchmark performance of trained models based on the 14 disease labels in our arXiv paper: https://arxiv.org/abs/1705.02315 ### Contents: 1. 112,120 frontal-view chest X-ray PNG images in 1024x1024 resolution (under the images folder) 2. Metadata for all images (Data_Entry_2017.csv): Image Index, Finding Labels, Follow-up #, Patient ID, Patient Age, Patient Gender, View Position, Original Image Size and Original Image Pixel Spacing. 3. Bounding boxes for ~1000 images (BBox_List_2017.csv): Image Index, Finding Label, Bbox [x, y, w, h]. [x, y] are the coordinates of each box's top-left corner; [w, h] are the width and height of each box. If you find the dataset useful for your research projects, please cite our CVPR 2017 paper: Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, Ronald M. Summers. ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases, IEEE CVPR, pp. 3462-3471, 2017</description>
<size>45089461497</size>
</item><item>
<title>MICCAI 2015 Challenge on Multimodal Brain Tumor Segmentation (BraTS2015)</title>
<category>Dataset</category>
<infohash>c4f39a0a8e46e8d2174b8a8a81b9887150f44d50</infohash>
<guid>https://academictorrents.com/details/c4f39a0a8e46e8d2174b8a8a81b9887150f44d50</guid>
<link>https://academictorrents.com/details/c4f39a0a8e46e8d2174b8a8a81b9887150f44d50</link>
<description>Brain tumor image data used in this article were obtained from the MICCAI Challenge on Multimodal Brain Tumor Segmentation. The challenge database contains fully anonymized images from the Cancer Imaging Archive. Labels: 1 for necrosis, 2 for edema, 3 for non-enhancing tumor, 4 for enhancing tumor, 0 for everything else. There are three requirements for the successful upload and validation of your segmentation: Use the MHA filetype to store your segmentations (not mhd) [use short or ushort if you experience any upload problems]. Keep the same labels as the provided truth.mha (see above). Name your segmentations according to this template: VSD.your_description.###.mha, replacing the ### with the ID of the corresponding Flair MR images. This allows the system to relate your segmentation to the correct training truth. Download an example list for the training data and testing data. ![](https://i.imgur.com/umg5BKD.png) ### Publications B. H. Menze et al., "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)," in IEEE Transactions on Medical Imaging, vol. 34, no. 10, pp. 1993-2024, Oct. 2015. doi: 10.1109/TMI.2014.2377694 http://ieeexplore.ieee.org/document/6975210/ Kistler et al., The virtual skeleton database: an open access repository for biomedical research and collaboration. JMIR, 2013.</description>
<size>5340438240</size>
</item><item>
<title>Non-Small Cell Lung Cancer CT Scan Dataset (NSCLC-Radiomics-Genomics)</title>
<category>Dataset</category>
<infohash>95b58ebfc1952780cfe2102dd7290889feefad66</infohash>
<guid>https://academictorrents.com/details/95b58ebfc1952780cfe2102dd7290889feefad66</guid>
<link>https://academictorrents.com/details/95b58ebfc1952780cfe2102dd7290889feefad66</link>
<description>This collection contains images from 89 non-small cell lung cancer (NSCLC) patients that were treated with surgery. For these patients, pretreatment CT scans, gene expression, and clinical data are available. This dataset refers to the Lung3 dataset of the study published in Nature Communications. In short, this publication applies a radiomic approach to computed tomography data of 1,019 patients with lung or head-and-neck cancer. Radiomics refers to the comprehensive quantification of tumour phenotypes by applying a large number of quantitative image features. In the present analysis, 440 features quantifying tumour image intensity, shape, and texture were extracted. We found that a large number of radiomic features have prognostic power in independent data sets, many of which were not identified as significant before. Radiogenomics analysis revealed that a prognostic radiomic signature, capturing intra-tumour heterogeneity, was associated with underlying gene-expression patterns. These data suggest that radiomics identifies a general prognostic phenotype existing in both lung and head-and-neck cancer. This may have a clinical impact, as imaging is routinely used in clinical practice, providing an unprecedented opportunity to improve decision support in cancer treatment at low cost. The dataset described here (Lung3) was used to investigate the association of radiomic imaging features with gene-expression profiles. The Lung2 dataset, used for training the radiomic biomarker and consisting of 422 NSCLC CT scans with outcome data, can be found here: NSCLC-Radiomics. For scientific inquiries about this dataset, please contact Dr. Hugo Aerts of the Dana-Farber Cancer Institute / Harvard Medical School (hugo_aerts@dfci.harvard.edu).</description>
<size>4522256159</size>
</item><item>
<title>Still Box</title>
<category>Dataset</category>
<infohash>4d3a60ad3c9ceac7662735ba8e90fb467b43a3aa</infohash>
<guid>https://academictorrents.com/details/4d3a60ad3c9ceac7662735ba8e90fb467b43a3aa</guid>
<link>https://academictorrents.com/details/4d3a60ad3c9ceac7662735ba8e90fb467b43a3aa</link>
<description/>
<size>42746571413</size>
</item><item>
<title>Ischemic Stroke Lesion Segmentation Challenge 2017 (ISLES2017)</title>
<category>Dataset</category>
<infohash>5bdb401695ad36d4ccd73da90c2f9f8ab6f82092</infohash>
<guid>https://academictorrents.com/details/5bdb401695ad36d4ccd73da90c2f9f8ab6f82092</guid>
<link>https://academictorrents.com/details/5bdb401695ad36d4ccd73da90c2f9f8ab6f82092</link>
<description>Ischemic Stroke Lesion Segmentation (ISLES), a medical image segmentation challenge at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2017. On the SMIR, you can register for the challenge, download the test data, and submit your results. For more information, visit the official ISLES homepage at www.isles-challenge.org. ### THE ISLES CHALLENGE This challenge for stroke lesion segmentation has been very popular over the past two years (2015, 2016) and has yielded various methods that help tackle important challenges of modern stroke imaging analysis. This year the challenge provides acute stroke imaging scans and manually outlined lesions on follow-up scans. ### HOW IT WORKS If you are interested in participating, you are invited to download the training set, including both the MRI scans and the corresponding expert segmentations of stroke lesions. This will allow you to validate and optimise your method as much as you like. Shortly before MICCAI 2017 takes place, a set of test cases will be released, on which participants will be asked to run their algorithms and upload their segmentation results in the form of binary image maps. To complete a successful participation, participants will need to submit an abstract describing the employed method. The organizers will then evaluate each case and establish a ranking of the participating teams. All results will be presented during SWITCH at MICCAI 2017 and will be discussed with invited experts and all workshop attendees. Each team will have the opportunity to present their submitted method as a poster, while selected teams will be asked to give a brief presentation detailing their approach. Eventually, submissions will be included in the workshop's LNCS post-proceedings and potentially compiled into a high-impact journal paper to summarise and present the findings. ### Please cite the challenge article if you use the data: Oskar Maier et al.
ISLES 2015 - A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI. Medical Image Analysis, available online 21 July 2016, ISSN 1361-8415. http://dx.doi.org/10.1016/j.media.2016.07.009 Kistler et al., The virtual skeleton database: an open access repository for biomedical research and collaboration. JMIR, 2013. http://doi.org/10.2196/jmir.2930</description>
<size>1403654243</size>
</item><item>
<title>NIH Pancreas-CT Dataset</title>
<category>Dataset</category>
<infohash>80ecfefcabede760cdbdf63e38986501f7becd49</infohash>
<guid>https://academictorrents.com/details/80ecfefcabede760cdbdf63e38986501f7becd49</guid>
<link>https://academictorrents.com/details/80ecfefcabede760cdbdf63e38986501f7becd49</link>
<description>### Summary The National Institutes of Health Clinical Center performed 82 abdominal contrast-enhanced 3D CT scans (~70 seconds after intravenous contrast injection in the portal-venous phase) from 53 male and 27 female subjects. Seventeen of the subjects are healthy kidney donors scanned prior to nephrectomy. The remaining 65 patients were selected by a radiologist from patients who had neither major abdominal pathologies nor pancreatic cancer lesions. Subjects' ages range from 18 to 76 years with a mean age of 46.8 ± 16.7. The CT scans have resolutions of 512x512 pixels with varying pixel sizes and slice thicknesses between 1.5 and 2.5 mm, acquired on Philips and Siemens MDCT scanners (120 kVp tube voltage). A medical student manually performed slice-by-slice segmentations of the pancreas as ground truth, and these were verified/modified by an experienced radiologist. The images were processed into NIfTI (.nii) files using the following script:     for i in $(ls . | grep PAN); do echo $i; dcm2niix -vox 1 -z y -o ./data/ -m y -s y -f %n $i; done     ### Citation Roth HR, Lu L, Farag A, Shin H-C, Liu J, Turkbey EB, Summers RM. DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation. N. Navab et al. (Eds.): MICCAI 2015, Part I, LNCS 9349, pp. 556–564, 2015. ### Examples ![](https://i.imgur.com/4aZNgw6.gifv) ![](https://i.imgur.com/kfhhH7x.png) ![](https://i.imgur.com/kGbz9hl.png)</description>
<size>4863883044</size>
</item><item>
<title>N+1 fish, N+2 fish dataset (train_videos)</title>
<category>Dataset</category>
<infohash>d232b4221119f034e5fdce2d1e5c16c142c2180d</infohash>
<guid>https://academictorrents.com/details/d232b4221119f034e5fdce2d1e5c16c142c2180d</guid>
<link>https://academictorrents.com/details/d232b4221119f034e5fdce2d1e5c16c142c2180d</link>
<description>Our video data was collected for an ongoing monitoring project involving our partners at The Nature Conservancy-Massachusetts (TNC) and the Gulf of Maine Research Institute (GMRI). We worked with the fishermen to create a dataset of video footage from different boats that can be released to the public. The video data is in the standard and well-supported MP4 v2 format with H.264 compression. These videos are on average 35 minutes long at a resolution of 640x480 and vary between 150MB and 550MB in size. We will also provide cropped fish images to aid in training complementary models. Our annotations will be categorical, with each fish having a single species. There are six species of interest for this competition, which appear in non-balanced proportions. As we aim to achieve performance on each of these species, we will be compiling a data set that is mostly balanced across these species. Complicating factors include different catch distributions across different vessels, which can have an adverse effect on the performance of the types of algorithms submitted if people attempt to game the system by parameterizing these distributions. We are attempting to balance the data set by providing at least 100 unique fish for each species within the videos from multiple vessels. ![](https://i.imgur.com/qHK6WUz.png) ![](https://i.imgur.com/T7UDAJA.png)</description>
<size>68734697745</size>
</item><item>
<title>N+1 fish, N+2 fish dataset (test_videos)</title>
<category>Dataset</category>
<infohash>6fc9279d862d4f6e42ec2613c5b5ceea165cff00</infohash>
<guid>https://academictorrents.com/details/6fc9279d862d4f6e42ec2613c5b5ceea165cff00</guid>
<link>https://academictorrents.com/details/6fc9279d862d4f6e42ec2613c5b5ceea165cff00</link>
<description>Our video data was collected for an ongoing monitoring project involving our partners at The Nature Conservancy-Massachusetts (TNC) and the Gulf of Maine Research Institute (GMRI). We worked with the fishermen to create a dataset of video footage from different boats that can be released to the public. The video data is in the standard and well-supported MP4 v2 format with H.264 compression. These videos are on average 35 minutes long at a resolution of 640x480 and vary between 150MB and 550MB in size. We will also provide cropped fish images to aid in training complementary models. Our annotations will be categorical, with each fish having a single species. There are six species of interest for this competition, which appear in non-balanced proportions. As we aim to achieve performance on each of these species, we will be compiling a data set that is mostly balanced across these species. Complicating factors include different catch distributions across different vessels, which can have an adverse effect on the performance of the types of algorithms submitted if people attempt to game the system by parameterizing these distributions. We are attempting to balance the data set by providing at least 100 unique fish for each species within the videos from multiple vessels. ![](https://i.imgur.com/qHK6WUz.png) ![](https://i.imgur.com/T7UDAJA.png)</description>
<size>32929954363</size>
</item><item>
<title>Draft-of-the-Climate-Science-Special-Report.pdf</title>
<category>Paper</category>
<infohash>5e2afb83a2f37394fffd95a78406f80a81058116</infohash>
<guid>https://academictorrents.com/details/5e2afb83a2f37394fffd95a78406f80a81058116</guid>
<link>https://academictorrents.com/details/5e2afb83a2f37394fffd95a78406f80a81058116</link>
<description>This is the 2017 "U.S. GLOBAL CHANGE RESEARCH PROGRAM CLIMATE SCIENCE SPECIAL REPORT (CSSR)" Third Order Draft. It leaked prior to agency approval. It was sourced from The New York Times.</description>
<size>53670206</size>
</item><item>
<title>Richard Feynman's Lectures on Physics (The Messenger Lectures)</title>
<category>Course</category>
<infohash>c5af268ec55cf2d3b439e7311ad43101ba8322eb</infohash>
<guid>https://academictorrents.com/details/c5af268ec55cf2d3b439e7311ad43101ba8322eb</guid>
<link>https://academictorrents.com/details/c5af268ec55cf2d3b439e7311ad43101ba8322eb</link>
<description>Volume I - mainly mechanics, radiation, and heat Volume II - mainly electromagnetism and matter Volume III - quantum mechanics</description>
<size>1073900292</size>
</item><item>
<title>AVA: A Large-Scale Database for Aesthetic Visual Analysis</title>
<category>Dataset</category>
<infohash>71631f83b11d3d79d8f84efe0a7e12f0ac001460</infohash>
<guid>https://academictorrents.com/details/71631f83b11d3d79d8f84efe0a7e12f0ac001460</guid>
<link>https://academictorrents.com/details/71631f83b11d3d79d8f84efe0a7e12f0ac001460</link>
<description>Aesthetic Visual Analysis (AVA) contains over 250,000 images along with a rich variety of meta-data including a large number of aesthetic scores for each image, semantic labels for over 60 categories as well as labels related to photographic style for high-level image quality categorization.</description>
<size>33142609854</size>
</item><item>
<title>Procedural Human Action Videos - Procedural Video Parameters</title>
<category>Dataset</category>
<infohash>935cb0f8ed8e4d3268bd96c4e174c60c33b8509f</infohash>
<guid>https://academictorrents.com/details/935cb0f8ed8e4d3268bd96c4e174c60c33b8509f</guid>
<link>https://academictorrents.com/details/935cb0f8ed8e4d3268bd96c4e174c60c33b8509f</link>
<description>Procedural Human Action Videos (PHAV) This torrent contains the Procedural Human Action Videos (PHAV) dataset. This version of the dataset contains 39,982 videos that have been generated by a computer. The dataset is made available under the Creative Commons BY-NC-SA (Attribution-NonCommercial-ShareAlike 4.0 International) license. This subdirectory contains the Procedural Video Parameters (PVP) annotations that give extended information about the contents of these videos from the viewpoint of the video generator software. A human-readable version of the license terms is available at: - https://creativecommons.org/licenses/by-nc-sa/4.0/ The full license terms are available at: - https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode By using PHAV, you agree to abide by the terms of the Creative Commons BY-NC-SA license. If you do not agree with the terms of this license, please contact the dataset authors for other licensing options. For more information, including details about how to parse the different files included in this release, how to use them in your experiments, and how to cite the PHAV dataset in your academic paper, please visit: - http://adas.cvc.uab.es/phav/</description>
<size>3057674242</size>
</item><item>
<title>Procedural Human Action Videos - Raw Frames</title>
<category>Dataset</category>
<infohash>eb5f83e9da7873d66f61c8c4215564b2e2531571</infohash>
<guid>https://academictorrents.com/details/eb5f83e9da7873d66f61c8c4215564b2e2531571</guid>
<link>https://academictorrents.com/details/eb5f83e9da7873d66f61c8c4215564b2e2531571</link>
<description>Procedural Human Action Videos (PHAV) This torrent contains the Procedural Human Action Videos (PHAV) dataset. This version of the dataset contains 39,982 videos that have been generated by a computer. The dataset is made available under the Creative Commons BY-NC-SA (Attribution-NonCommercial-ShareAlike 4.0 International) license. This subdirectory contains the raw RGB output of the generator, without applying any post-processing effects. We provide this data modality for completeness only. A human-readable version of the license terms is available at: - https://creativecommons.org/licenses/by-nc-sa/4.0/ The full license terms are available at: - https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode By using PHAV, you agree to abide by the terms of the Creative Commons BY-NC-SA license. If you do not agree with the terms of this license, please contact the dataset authors for other licensing options. For more information, including details about how to parse the different files included in this release, how to use them in your experiments, and how to cite the PHAV dataset in your academic paper, please visit: - http://adas.cvc.uab.es/phav/</description>
<size>711206440219</size>
</item><item>
<title>Procedural Human Action Videos - Textual Annotations</title>
<category>Dataset</category>
<infohash>b678dcd3577231cd0844b1e01f675fe0091b50f1</infohash>
<guid>https://academictorrents.com/details/b678dcd3577231cd0844b1e01f675fe0091b50f1</guid>
<link>https://academictorrents.com/details/b678dcd3577231cd0844b1e01f675fe0091b50f1</link>
<description>Procedural Human Action Videos (PHAV)
----------------------------------------

This torrent contains the Procedural Human Action Videos (PHAV) dataset. This version of the dataset contains 39,982 videos that have been generated by a computer. The dataset is made available under the Creative Commons BY-NC-SA (Attribution-NonCommercial-ShareAlike 4.0 International) license. This subdirectory contains textual annotations about the contents of these videos, including:
- extrinsic.txt: Extrinsic camera parameters
- 3dpose.txt:    Actor location in camera coordinates
- bbox2d.txt:    2D bounding boxes in screen coordinates
- bbox3d.txt:    3D bounding boxes in world coordinates
- colors.txt:    Color palette for pixel classes in semantic segmentation
- instances.txt: Color palette for pixel classes in instance segmentation
- general.txt:   Intrinsic camera parameters and other general information
- joints.txt:    Body joint locations in screen coordinates (pose)
- muscles.txt:   Physical properties of body muscles for the main actor

A human-readable version of the license terms is available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/

The full license terms are available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode

By using PHAV, you agree to abide by the terms of the Creative Commons BY-NC-SA license. If you do not agree with the terms of this license, please contact the dataset authors for other licensing options.

For more information, including details about how to parse the different files included in this release, how to use them in your experiments, and how to cite the PHAV dataset in your academic paper, please visit:
- http://adas.cvc.uab.es/phav/</description>
<size>18133419107</size>
</item><item>
<title>Procedural Human Action Videos - Instance Segmentation</title>
<category>Dataset</category>
<infohash>109e763bb419a96165494bf50b60a39df38f4720</infohash>
<guid>https://academictorrents.com/details/109e763bb419a96165494bf50b60a39df38f4720</guid>
<link>https://academictorrents.com/details/109e763bb419a96165494bf50b60a39df38f4720</link>
<description>Procedural Human Action Videos (PHAV)
----------------------------------------

This torrent contains the Procedural Human Action Videos (PHAV) dataset. This version of the dataset contains 39,982 videos that have been generated by a computer. The dataset is made available under the Creative Commons BY-NC-SA (Attribution-NonCommercial-ShareAlike 4.0 International) license. This subdirectory contains the Pixel-wise Semantic Segmentation Instance Ground Truth data modality for these videos.

A human-readable version of the license terms is available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/

The full license terms are available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode

By using PHAV, you agree to abide by the terms of the Creative Commons BY-NC-SA license. If you do not agree with the terms of this license, please contact the dataset authors for other licensing options.

For more information, including details about how to parse the different files included in this release, how to use them in your experiments, and how to cite the PHAV dataset in your academic paper, please visit:
- http://adas.cvc.uab.es/phav/</description>
<size>14330655117</size>
</item><item>
<title>Procedural Human Action Videos - Ground Truth Optical Flow</title>
<category>Dataset</category>
<infohash>52719e28efd10da65fa9ed1539792988e0f5bcf3</infohash>
<guid>https://academictorrents.com/details/52719e28efd10da65fa9ed1539792988e0f5bcf3</guid>
<link>https://academictorrents.com/details/52719e28efd10da65fa9ed1539792988e0f5bcf3</link>
<description>Procedural Human Action Videos (PHAV)
----------------------------------------

This torrent contains the Procedural Human Action Videos (PHAV) dataset. This version of the dataset contains 39,982 videos that have been generated by a computer. The dataset is made available under the Creative Commons BY-NC-SA (Attribution-NonCommercial-ShareAlike 4.0 International) license. This subdirectory contains the Horizontal and Vertical Optical Flow Ground Truth data modality of these videos.

A human-readable version of the license terms is available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/

The full license terms are available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode

By using PHAV, you agree to abide by the terms of the Creative Commons BY-NC-SA license. If you do not agree with the terms of this license, please contact the dataset authors for other licensing options.

For more information, including details about how to parse the different files included in this release, how to use them in your experiments, and how to cite the PHAV dataset in your academic paper, please visit:
- http://adas.cvc.uab.es/phav/</description>
<size>19310076969</size>
</item><item>
<title>Procedural Human Action Videos - Depth Maps</title>
<category>Dataset</category>
<infohash>eb1ae18213ba0fb97e33ed773eea900a8374efd0</infohash>
<guid>https://academictorrents.com/details/eb1ae18213ba0fb97e33ed773eea900a8374efd0</guid>
<link>https://academictorrents.com/details/eb1ae18213ba0fb97e33ed773eea900a8374efd0</link>
<description>Procedural Human Action Videos (PHAV)
----------------------------------------

This torrent contains the Procedural Human Action Videos (PHAV) dataset. This version of the dataset contains 39,982 videos that have been generated by a computer. The dataset is made available under the Creative Commons BY-NC-SA (Attribution-NonCommercial-ShareAlike 4.0 International) license. This subdirectory contains the Depth Map data modality of these videos.

A human-readable version of the license terms is available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/

The full license terms are available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode

By using PHAV, you agree to abide by the terms of the Creative Commons BY-NC-SA license. If you do not agree with the terms of this license, please contact the dataset authors for other licensing options.

For more information, including details about how to parse the different files included in this release, how to use them in your experiments, and how to cite the PHAV dataset in your academic paper, please visit:
- http://adas.cvc.uab.es/phav/</description>
<size>124045814703</size>
</item><item>
<title>Procedural Human Action Videos - Semantic Segmentation</title>
<category>Dataset</category>
<infohash>67c77d6a9363e640c8a0ce7f7d2a028f50a9cd55</infohash>
<guid>https://academictorrents.com/details/67c77d6a9363e640c8a0ce7f7d2a028f50a9cd55</guid>
<link>https://academictorrents.com/details/67c77d6a9363e640c8a0ce7f7d2a028f50a9cd55</link>
<description>Procedural Human Action Videos (PHAV)
----------------------------------------

This torrent contains the Procedural Human Action Videos (PHAV) dataset. This version of the dataset contains 39,982 videos that have been generated by a computer. The dataset is made available under the Creative Commons BY-NC-SA (Attribution-NonCommercial-ShareAlike 4.0 International) license. This subdirectory contains the Pixel-wise Semantic Segmentation Class Ground Truth data modality of these videos.

A human-readable version of the license terms is available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/

The full license terms are available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode

By using PHAV, you agree to abide by the terms of the Creative Commons BY-NC-SA license. If you do not agree with the terms of this license, please contact the dataset authors for other licensing options.

For more information, including details about how to parse the different files included in this release, how to use them in your experiments, and how to cite the PHAV dataset in your academic paper, please visit:
- http://adas.cvc.uab.es/phav/</description>
<size>41868378787</size>
</item><item>
<title>Procedural Human Action Videos - Post-processed RGB Frames</title>
<category>Dataset</category>
<infohash>7a8b49530d40331d4fbdf0511844d52996683196</infohash>
<guid>https://academictorrents.com/details/7a8b49530d40331d4fbdf0511844d52996683196</guid>
<link>https://academictorrents.com/details/7a8b49530d40331d4fbdf0511844d52996683196</link>
<description>Procedural Human Action Videos (PHAV)
----------------------------------------

This torrent contains the Procedural Human Action Videos (PHAV) dataset. This version of the dataset contains 39,982 videos that have been generated by a computer. The dataset is made available under the Creative Commons BY-NC-SA (Attribution-NonCommercial-ShareAlike 4.0 International) license. This subdirectory contains the Post-processed RGB data modality of these videos.

A human-readable version of the license terms is available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/

The full license terms are available at:
- https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode

By using PHAV, you agree to abide by the terms of the Creative Commons BY-NC-SA license. If you do not agree with the terms of this license, please contact the dataset authors for other licensing options.

For more information, including details about how to parse the different files included in this release, how to use them in your experiments, and how to cite the PHAV dataset in your academic paper, please visit:
- http://adas.cvc.uab.es/phav/</description>
<size>148676523138</size>
</item><item>
<title>wextractorDatasetv1</title>
<category>Dataset</category>
<infohash>9f7a0c5280d594f6c21aa3f6c2c760484a1e0edc</infohash>
<guid>https://academictorrents.com/details/9f7a0c5280d594f6c21aa3f6c2c760484a1e0edc</guid>
<link>https://academictorrents.com/details/9f7a0c5280d594f6c21aa3f6c2c760484a1e0edc</link>
<description/>
<size>83347104</size>
</item><item>
<title>New York Taxi Data 2009-2016 in Parquet Format</title>
<category>Dataset</category>
<infohash>4f465810b86c6b793d1c7556fe3936441081992e</infohash>
<guid>https://academictorrents.com/details/4f465810b86c6b793d1c7556fe3936441081992e</guid>
<link>https://academictorrents.com/details/4f465810b86c6b793d1c7556fe3936441081992e</link>
<description>Trip record data from the NYC Taxi and Limousine Commission (http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml), covering January 2009 through December 2016, was consolidated and brought into a consistent Parquet format by Ravi Shekhar &lt;ravi dot shekhar at gmail dot com&gt;. The data is released under the New York Open Data Law.</description>
<size>35078948106</size>
</item><item>
<title>Small Object Dataset</title>
<category>Dataset</category>
<infohash>8e751c111cf90123374b5f0cf61e6af9f5e5231e</infohash>
<guid>https://academictorrents.com/details/8e751c111cf90123374b5f0cf61e6af9f5e5231e</guid>
<link>https://academictorrents.com/details/8e751c111cf90123374b5f0cf61e6af9f5e5231e</link>
<description>Images of small objects for small instance detection. Currently four object types are available. ![](http://visal.cs.cityu.edu.hk/wp/wp-content/uploads/smallobject.jpg) We collected four datasets of small objects from images/videos on the Internet (e.g. YouTube or Google). Fly Dataset: contains 600 video frames with an average of 86 ± 39 flies per frame (648×72 @ 30 fps); 32 images are used for training (1:6:187) and 50 images for testing (301:6:600). Honeybee Dataset: contains 118 images with an average of 28 ± 6 honeybees per image (640×480); the dataset is divided evenly into training and test sets, but only the first 32 images are used for training. Fish Dataset: contains 387 video frames with an average of 56 ± 9 fish per frame (300×410 @ 30 fps); 32 images are used for training (1:3:94) and 65 for testing (193:3:387). Seagull Dataset: contains three high-resolution images (624×964) with an average of 866 ± 107 seagulls per image; the first image is used for training, and the rest for testing. Cite this paper: http://visal.cs.cityu.edu.hk/static/pubs/conf/cvpr15-densdet.pdf</description>
<size>5858609</size>
</item><item>
<title>Downsampled ImageNet 32x32</title>
<category>Dataset</category>
<infohash>bf62f5051ef878b9c357e6221e879629a9b4b172</infohash>
<guid>https://academictorrents.com/details/bf62f5051ef878b9c357e6221e879629a9b4b172</guid>
<link>https://academictorrents.com/details/bf62f5051ef878b9c357e6221e879629a9b4b172</link>
<description>This torrent includes downsampled ImageNet images, which can be used for density estimation and generative modeling experiments. The images come in two resolutions, 32x32 and 64x64, and were introduced in Pixel Recurrent Neural Networks; this torrent contains the 32x32 version. Please refer to the Pixel RNN paper for more details and results. ![](https://i.imgur.com/s6gdDuX.jpg)</description>
<size>4274493440</size>
</item><item>
<title>Downsampled ImageNet 64x64</title>
<category>Dataset</category>
<infohash>96816a530ee002254d29bf7a61c0c158d3dedc3b</infohash>
<guid>https://academictorrents.com/details/96816a530ee002254d29bf7a61c0c158d3dedc3b</guid>
<link>https://academictorrents.com/details/96816a530ee002254d29bf7a61c0c158d3dedc3b</link>
<description>This torrent includes downsampled ImageNet images, which can be used for density estimation and generative modeling experiments. The images come in two resolutions, 32x32 and 64x64, and were introduced in Pixel Recurrent Neural Networks; this torrent contains the 64x64 version. Please refer to the Pixel RNN paper for more details and results. ![](https://i.imgur.com/s6gdDuX.jpg)</description>
<size>12589844480</size>
</item><item>
<title>Test Torrent</title>
<category>Dataset</category>
<infohash>d984f67af9917b214cd8b6048ab5624c7df6a07a</infohash>
<guid>https://academictorrents.com/details/d984f67af9917b214cd8b6048ab5624c7df6a07a</guid>
<link>https://academictorrents.com/details/d984f67af9917b214cd8b6048ab5624c7df6a07a</link>
<description>This torrent is used for testing BitTorrent clients with Academic Torrents</description>
<size>19296724</size>
</item><item>
<title>Human acute monocytic leukemia</title>
<category>Dataset</category>
<infohash>8464b9f9166c143040fee655f0284085fe251a80</infohash>
<guid>https://academictorrents.com/details/8464b9f9166c143040fee655f0284085fe251a80</guid>
<link>https://academictorrents.com/details/8464b9f9166c143040fee655f0284085fe251a80</link>
<description>Complete dataset of the imaging flow cytometry of the human acute monocytic leukemia (THP-1) cells acquired by ultrafast optical time-stretch microscopy technique. Published in: Antony C. S. Chan, Ho-Cheung Ng, Sharat C. V. Bogaraju, Hayden K. H. So, Edmund Y. Lam &amp; Kevin K. Tsia, "All-passive pixel super-resolution of time-stretch imaging" Scientific Reports 7, 44608 (2017) http://dx.doi.org/10.1038/srep44608 Preprint: https://arxiv.org/abs/1610.05802 To access the serial-temporal line-scans of the cellular images in MATLAB:</description>
<size>1609437908</size>
</item><item>
<title>Udacity / Didi $100k Competition Calibration Dataset</title>
<category>Dataset</category>
<infohash>ffb0db5195d5e53041da6a7e168ce930987bc2ea</infohash>
<guid>https://academictorrents.com/details/ffb0db5195d5e53041da6a7e168ce930987bc2ea</guid>
<link>https://academictorrents.com/details/ffb0db5195d5e53041da6a7e168ce930987bc2ea</link>
<description/>
<size>13594418058</size>
</item><item>
<title>reddit_data</title>
<category>Dataset</category>
<infohash>85a5bd50e4c365f8df70240ffd4ecc7dec59912b</infohash>
<guid>https://academictorrents.com/details/85a5bd50e4c365f8df70240ffd4ecc7dec59912b</guid>
<link>https://academictorrents.com/details/85a5bd50e4c365f8df70240ffd4ecc7dec59912b</link>
<description>Reddit comments from 2005-12 to 2017-03, downloaded from https://files.pushshift.io/comments. You can find a current list of SHA sums there to verify this torrent's downloads. Intended use is for scientific / non-commercial purposes. Example code to work with the data: https://github.com/dewarim/reddit-data-tools</description>
<size>304759701602</size>
</item><item>
<title>Didi Data Release #2 - Round 1 Test Sequence and Training</title>
<category>Dataset</category>
<infohash>18d7f6be647eb6d581f5ff61819a11b9c21769c7</infohash>
<guid>https://academictorrents.com/details/18d7f6be647eb6d581f5ff61819a11b9c21769c7</guid>
<link>https://academictorrents.com/details/18d7f6be647eb6d581f5ff61819a11b9c21769c7</link>
<description>Udacity is using a new dataset production method that allows for quick processing and release cycles. Instead of spending weeks (or months) waiting on 3D annotation data to be produced by third-party companies, we have elected to try something new that enables datasets to be released immediately after they are recorded. While we lose some sample distribution on each individual dataset because the same obstacles are used for each session, the massive speedup in production and reduction in cost allows us to release new datasets daily (and with different obstacles in each session). In this manner, we can directly control the type of data being recorded so that we can cover all situations without hoping for them to happen on real roads, and we have extreme precision on obstacle location with differential RTK GPS technology.

Due to this new approach, there are some major differences from the Kitti datasets. It is important to note that positions are recorded with respect to the base station, not the capture vehicle. The NED positions in the 'rtkfix' topic are therefore in relation to a FIXED POINT, NOT THE CAPTURE OR OBSTACLE VEHICLES. The relative positions can be calculated easily, as the NED frame is Cartesian space, not polar. The single obstacle vehicle in this dataset is located in the 'obstacle/obs1/rear' topic namespace. The orientation of obstacles is not evaluated in Round 1, but will be evaluated in Round 2. The pose section of the ROS bags included in this release IS NOT A VALID QUATERNION, and does not represent either the pose of the capture vehicle or the obstacle. However, in this dataset, we have included an additional GPS antenna mounted on the rear of the capture vehicle to get a proper orientation. The tracklet generation code (link below) is currently being modified to translate the XML files into the proper vehicle frame with the capture vehicle orientation.
Since this is open source code, we welcome your contributions and look forward to accepting pull requests. Metadata about each obstacle (length, width, height, GPS antenna location as measured from the rear/left/ground) is included in each obstacle data directory. Tracklet file generation code, as well as sensor transforms/URDF files, is available at this repository: https://github.com/udacity/didi-competition This release requires running a ROS Velodyne driver for an HDL-32E to decode '/velodyne_packets' into '/velodyne_points'. The ROI of the captured camera imagery has also been enlarged at the community's request to provide more data. Metadata for the obstacle has also been made available for Round 1.</description>
<size>21929778522</size>
</item><item>
<title>VGG Cell Dataset from Learning To Count Objects in Images</title>
<category>Dataset</category>
<infohash>b32305598175bb8e03c5f350e962d772a910641c</infohash>
<guid>https://academictorrents.com/details/b32305598175bb8e03c5f350e962d772a910641c</guid>
<link>https://academictorrents.com/details/b32305598175bb8e03c5f350e962d772a910641c</link>
<description>![](https://i.imgur.com/ydlsPEh.png) We generated a dataset of 200 images, and used random subsets of the first 100 images to perform training and parameter validation, and the second 100 images to test the counting accuracy. Above, we show some representative results for cell counting on previously unseen images. ### Acknowledgements This work is a part of the EU VisRec project (ERC grant VisRec no. 228180).</description>
<size>16339802</size>
</item><item>
<title>A collection of sport activity datasets for data analysis and data mining 2017a</title>
<category>Dataset</category>
<infohash>f2221a292540ff3e6c85025754f775361c7cd886</infohash>
<guid>https://academictorrents.com/details/f2221a292540ff3e6c85025754f775361c7cd886</guid>
<link>https://academictorrents.com/details/f2221a292540ff3e6c85025754f775361c7cd886</link>
<description/>
<size>789140302</size>
</item><item>
<title>GoogleNews-vectors-negative300.bin.gz - Efficient estimation of word representations in vector space</title>
<category>Dataset</category>
<infohash>2aa0d0c6aff92f08719e409db04ecee4721cf21f</infohash>
<guid>https://academictorrents.com/details/2aa0d0c6aff92f08719e409db04ecee4721cf21f</guid>
<link>https://academictorrents.com/details/2aa0d0c6aff92f08719e409db04ecee4721cf21f</link>
<description/>
<size>1647046227</size>
</item><item>
<title>Udacity Didi $100k Challenge Dataset 1</title>
<category>Dataset</category>
<infohash>76352487923a31d47a6029ddebf40d9265e770b5</infohash>
<guid>https://academictorrents.com/details/76352487923a31d47a6029ddebf40d9265e770b5</guid>
<link>https://academictorrents.com/details/76352487923a31d47a6029ddebf40d9265e770b5</link>
<description>First Full Dataset Release - Udacity/Didi $100k Challenge

One of the most important aspects of operating an autonomous vehicle is understanding the surrounding environment in order to make safe decisions. Udacity and Didi Chuxing are partnering to provide an incentive for students to come up with the best way to detect obstacles using camera and LIDAR data. This challenge will allow for pedestrian, vehicle, and general obstacle detection that is useful to both human drivers and self-driving car systems. Competitors will need to process LIDAR and camera frames to output a set of obstacles, removing noise and environmental returns. Participants will be able to build on the large body of work that has been put into the Kitti datasets and challenges, using existing techniques and their own novel approaches to improve the current state of the art. Specifically, students will be competing against each other on the Kitti Object Detection Evaluation benchmark. While a current leaderboard exists for academic publications, Udacity and Didi will be hosting our own leaderboard specifically for this challenge, and we will be using the standard object detection development kit that enables us to evaluate approaches as they are done in academia and industry.

IMPORTANT NOTICE: There are some major differences between this Udacity dataset and the Kitti datasets. It is important to note that positions are recorded with respect to the base station, not the capture vehicle. The NED positions in the 'rtkfix' topic are therefore in relation to a FIXED POINT, NOT THE CAPTURE OR OBSTACLE VEHICLES. The relative positions can be calculated easily, as the NED frame is Cartesian space, not polar. The XML tracklet files will, however, be in the frame of the capture vehicle. This means that the capture vehicle is also included in the recorded positions, and is denoted by the ROS topic '/gps/rtkfix' in this first dataset.
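Because the NED frame is Cartesian, the relative position described above is just a componentwise subtraction of the two base-station-relative vectors. A minimal sketch (the coordinate values and the helper name are illustrative, not part of the dataset):

```python
# Positions from the 'rtkfix' topics are north/east/down offsets, in
# meters, from the FIXED RTK base station -- not from the capture car.
# Since NED is a Cartesian frame, the obstacle position relative to the
# capture vehicle is a plain componentwise difference.

def relative_ned(capture_ned, obstacle_ned):
    """Obstacle position relative to the capture vehicle, in the NED frame."""
    return tuple(o - c for o, c in zip(obstacle_ned, capture_ned))

# Illustrative (made-up) readings, relative to the base station:
capture = (12.0, -3.5, 0.2)    # capture vehicle (north, east, down)
obstacle = (30.0, 1.5, 0.2)    # obstacle vehicle (north, east, down)
print(relative_ned(capture, obstacle))  # (18.0, 5.0, 0.0)
```

The same subtraction works for any pair of readings taken at the same timestamp, since both are expressed in the one fixed base-station frame.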
The single obstacle vehicle in this dataset is located in the 'obs1/' topic namespace, but this will be changed to '/obstacles/obstacle_name' in future releases to accommodate the creation of XML tracklet files for multiple obstacles. The orientation of obstacles is not evaluated in Round 1, but will be evaluated in Round 2. The pose section of the ROS bags included in this release IS NOT A VALID QUATERNION, and does not represent either the pose of the capture vehicle or the obstacle. There is no XML tracklet file included with these datasets; the files will be released as soon as they are available, in conjunction with the opening of the online leaderboard.</description>
<size>32801992154</size>
</item><item>
<title>OpenDota - All Matches from March 2016 - Player Matches</title>
<category>Dataset</category>
<infohash>1a0c5736bb54610ad00a45306df2b33628301409</infohash>
<guid>https://academictorrents.com/details/1a0c5736bb54610ad00a45306df2b33628301409</guid>
<link>https://academictorrents.com/details/1a0c5736bb54610ad00a45306df2b33628301409</link>
<description>This is a data dump of all the parsed and unparsed Dota 2 matches from OpenDota (formerly yasp.co) as of March 2016, covering 1,191,768,403 matches. This is part 3 of three files (matches and match_skill are the other two). This file includes the per-player performance data for each match; join it with the matches data.</description>
<size>542664372161</size>
</item><item>
<title>OpenDota - All Matches from March 2016 - Match Skill</title>
<category>Dataset</category>
<infohash>c41059d1fd3b9a0f3dfec769ac2d6468c0e955b8</infohash>
<guid>https://academictorrents.com/details/c41059d1fd3b9a0f3dfec769ac2d6468c0e955b8</guid>
<link>https://academictorrents.com/details/c41059d1fd3b9a0f3dfec769ac2d6468c0e955b8</link>
<description>This is a data dump of all the parsed and unparsed Dota 2 matches from OpenDota (formerly yasp.co) as of March 2016, covering 1,191,768,403 matches. This is part 2 of three files (matches and player_matches are the other two). This file includes skill data (normal, high, very high) as given by the Dota 2 web API; join it with the matches data.</description>
<size>1721815370</size>
</item><item>
<title>OpenDota - All Matches from March 2016 - Matches</title>
<category>Dataset</category>
<infohash>0ddf777978c0669b52fadd1baa9e256a6d8b3996</infohash>
<guid>https://academictorrents.com/details/0ddf777978c0669b52fadd1baa9e256a6d8b3996</guid>
<link>https://academictorrents.com/details/0ddf777978c0669b52fadd1baa9e256a6d8b3996</link>
<description>This is a data dump of all the parsed and unparsed Dota 2 matches from OpenDota (formerly yasp.co) as of March 2016, covering 1,191,768,403 matches. This is part 1 of three files (player_matches and match_skill are the other two). This file includes the match metadata.</description>
<size>155941971528</size>
</item><item>
<title>Social Media Analyzing.ova</title>
<category>Dataset</category>
<infohash>5c7d429c9991bf87fea35feef68889eada4a3425</infohash>
<guid>https://academictorrents.com/details/5c7d429c9991bf87fea35feef68889eada4a3425</guid>
<link>https://academictorrents.com/details/5c7d429c9991bf87fea35feef68889eada4a3425</link>
<description>This is a project on Social Media Sentiment Analysis using the Hortonworks Sandbox, following the procedure provided at https://hortonworks.com/hadoop-tutorial/how-to-refine-and-visualize-sentiment-data/. The default username and password are root and clickstream, respectively. Any BI tool can be used, but I recommend Tableau, which can be downloaded from https://www.tableau.com/. You can contact me at cmdude16@gmail.com for further guidance.</description>
<size>15408308736</size>
</item><item>
<title>UC Berkeley Computer Science Courses (Full Collection)</title>
<category>Course</category>
<infohash>5e84be34f69b1a313f6dcb51667edf238d5d4412</infohash>
<guid>https://academictorrents.com/details/5e84be34f69b1a313f6dcb51667edf238d5d4412</guid>
<link>https://academictorrents.com/details/5e84be34f69b1a313f6dcb51667edf238d5d4412</link>
<description>An archive of UC Berkeley Computer Science Courses</description>
<size>446277845917</size>
</item><item>
<title>Public Health 241, 001 - Spring 2011 - UC Berkeley</title>
<category>Course</category>
<infohash>8469a366bf4b71ff62d9e2327537771bdc145dfa</infohash>
<guid>https://academictorrents.com/details/8469a366bf4b71ff62d9e2327537771bdc145dfa</guid>
<link>https://academictorrents.com/details/8469a366bf4b71ff62d9e2327537771bdc145dfa</link>
<description>Biostatistical concepts and modeling relevant to the design and analysis of multifactor population-based cohort and case-control studies, including matching. Measures of association, causal inference, confounding, and interaction. Introduction to binary regression, including logistic regression.</description>
<size>1874365386</size>
</item><item>
<title>Law 271, Environmental Law and Policy - Fall 2009 - UC Berkeley</title>
<category>Course</category>
<infohash>c878ea12eff8c99cc6b1983a2d7724a18bb1a94d</infohash>
<guid>https://academictorrents.com/details/c878ea12eff8c99cc6b1983a2d7724a18bb1a94d</guid>
<link>https://academictorrents.com/details/c878ea12eff8c99cc6b1983a2d7724a18bb1a94d</link>
<description>This introductory course is designed to explore fundamental legal and policy issues in environmental law. Through examination of environmental common law and key federal environmental statutes, including the National Environmental Policy Act, Clean Air Act, and Clean Water Act, it exposes students to the major challenges to environmental law and the principal approaches to meeting those challenges, including litigation, command and control regulation, technology forcing, market incentives, and information disclosure requirements. With the addition of cross-cutting topics such as risk assessment and environmental federalism, it also gives students a grounding in how choices about regulatory standards and levels of regulatory authority are made.</description>
<size>4253328680</size>
</item><item>
<title>Peace and Conflict Studies 164B - Spring 2007 - UC Berkeley</title>
<category>Course</category>
<infohash>0ec86c151be43be4857e2da370fb5508ff418146</infohash>
<guid>https://academictorrents.com/details/0ec86c151be43be4857e2da370fb5508ff418146</guid>
<link>https://academictorrents.com/details/0ec86c151be43be4857e2da370fb5508ff418146</link>
<description>This course introduces students to a broad range of issues, concepts, and approaches integral to the study of peace and conflict. Subject areas include the war system and war prevention, conflict resolution and nonviolence, human rights and social justice, development and environmental sustainability. Required of all Peace and Conflict Studies majors.</description>
<size>5065283796</size>
</item><item>
<title>Statistics 21 - 001 - Spring 2010 - UC Berkeley</title>
<category>Course</category>
<infohash>56b38a7013673c92cb951eb79bcd3a26e8158095</infohash>
<guid>https://academictorrents.com/details/56b38a7013673c92cb951eb79bcd3a26e8158095</guid>
<link>https://academictorrents.com/details/56b38a7013673c92cb951eb79bcd3a26e8158095</link>
<description>Descriptive statistics, probability models and related concepts, sample surveys, estimates, confidence intervals, tests of significance, controlled experiments vs. observational studies, correlation and regression.</description>
<size>5575307389</size>
</item><item>
<title>Statistics 21 - Fall 2009 - UC Berkeley</title>
<category>Course</category>
<infohash>4d505fe0b3cbcbeff32bfd7b75a783f900dc8c6d</infohash>
<guid>https://academictorrents.com/details/4d505fe0b3cbcbeff32bfd7b75a783f900dc8c6d</guid>
<link>https://academictorrents.com/details/4d505fe0b3cbcbeff32bfd7b75a783f900dc8c6d</link>
<description>Descriptive statistics, probability models and related concepts, sample surveys, estimates, confidence intervals, tests of significance, controlled experiments vs. observational studies, correlation and regression.</description>
<size>4268779318</size>
</item><item>
<title>International and Area Studies 107, 001 - Spring 2011 - UC Berkeley</title>
<category>Course</category>
<infohash>97e704bba2ad3fb2dc5d932f4ed693fcb2f85b30</infohash>
<guid>https://academictorrents.com/details/97e704bba2ad3fb2dc5d932f4ed693fcb2f85b30</guid>
<link>https://academictorrents.com/details/97e704bba2ad3fb2dc5d932f4ed693fcb2f85b30</link>
<description>This course is designed as a comprehensive overview of intermediate macroeconomic theory, focusing on economic growth and international economics. It covers a number of topics, including the history of economic growth, the industrial revolution, post-industrial-revolution divergence, flexible-price and sticky-price macroeconomics, and macroeconomic policy. The course is structured for majors in International and Area Studies and other non-economics social science majors.</description>
<size>1643354598</size>
</item><item>
<title>Multivariable Calculus - Math 53 - Fall 2009 - UC Berkeley</title>
<category>Course</category>
<infohash>d90733721eb2a2ba839434decce91ce4803cbf1e</infohash>
<guid>https://academictorrents.com/details/d90733721eb2a2ba839434decce91ce4803cbf1e</guid>
<link>https://academictorrents.com/details/d90733721eb2a2ba839434decce91ce4803cbf1e</link>
<description>Math 53 - Section 1 - Multivariable Calculus. Instructor: Edward Frenkel. Lectures: TT 3:30-5:00pm, Room 155 Dwinelle. Course Control Number: 54296. Office: 819 Evans. Office Hours: TBA. Prerequisites: Math 1A, 1B. Required Text: Stewart, Multivariable Calculus (custom edition). Course Webpage: to be linked from http://math.berkeley.edu/~frenkel/Math53. Grading: 25% quizzes and homework, 20% each midterm, 35% final. Homework: Homework for the entire course will be assigned at the beginning of the semester, and weekly homework will be due at the beginning of each week. Comments: Students must make sure they have no scheduling conflicts with the final exam; missing the final exam means an automatic failing grade for the entire course.</description>
<size>5840425737</size>
</item><item>
<title>Political Science 179 - Spring 2008 - UC Berkeley</title>
<category>Course</category>
<infohash>e75a329db4adabdc45502c401a1c4b69712cbb98</infohash>
<guid>https://academictorrents.com/details/e75a329db4adabdc45502c401a1c4b69712cbb98</guid>
<link>https://academictorrents.com/details/e75a329db4adabdc45502c401a1c4b69712cbb98</link>
<description>Political issues facing the state of California, the United States, or the international community.</description>
<size>1496266068</size>
</item><item>
<title>Chemistry 1A, 002 - Spring 2010 - UC Berkeley</title>
<category>Course</category>
<infohash>ebe62de0d85ba9563c566cc5b082416792bc00ca</infohash>
<guid>https://academictorrents.com/details/ebe62de0d85ba9563c566cc5b082416792bc00ca</guid>
<link>https://academictorrents.com/details/ebe62de0d85ba9563c566cc5b082416792bc00ca</link>
<description>Stoichiometry of chemical reactions, quantum mechanical description of atoms, the elements and periodic table, chemical bonding, real and ideal gases, thermochemistry, introduction to thermodynamics and equilibrium, acid-base and solubility equilibria, introduction to oxidation-reduction reactions, introduction to chemical kinetics.</description>
<size>9283464842</size>
</item><item>
<title>Chemistry 3B, 002 - Fall 2014 - UC Berkeley</title>
<category>Course</category>
<infohash>769ef081e79307987fd52ed97c82fe7c590c88f8</infohash>
<guid>https://academictorrents.com/details/769ef081e79307987fd52ed97c82fe7c590c88f8</guid>
<link>https://academictorrents.com/details/769ef081e79307987fd52ed97c82fe7c590c88f8</link>
<description>Conjugation, aromatic chemistry, carbonyl compounds, carbohydrates, amines, carboxylic acids, amino acids, peptides, proteins, and nucleic acid chemistry. Ultraviolet spectroscopy and mass spectrometry will be introduced.</description>
<size>4788602900</size>
</item><item>
<title>Economics 1, 001 - Fall 2011 - UC Berkeley</title>
<category>Course</category>
<infohash>93628c9e317768a5bf994eec845834d9e4a749e9</infohash>
<guid>https://academictorrents.com/details/93628c9e317768a5bf994eec845834d9e4a749e9</guid>
<link>https://academictorrents.com/details/93628c9e317768a5bf994eec845834d9e4a749e9</link>
<description>A survey of economics designed to give an overview of the field.</description>
<size>4186156273</size>
</item><item>
<title>Physics 10, 001 - Spring 2006 - UC Berkeley</title>
<category>Course</category>
<infohash>5140da14dd72b2a6f19a5ca08d2e2d015754909a</infohash>
<guid>https://academictorrents.com/details/5140da14dd72b2a6f19a5ca08d2e2d015754909a</guid>
<link>https://academictorrents.com/details/5140da14dd72b2a6f19a5ca08d2e2d015754909a</link>
<description>The most interesting and important topics in physics, stressing conceptual understanding rather than math, with applications to current events. Topics covered may vary and may include energy and conservation, radioactivity, nuclear physics, the Theory of Relativity, lasers, explosions, earthquakes, superconductors, and quantum physics.</description>
<size>4820005717</size>
</item><item>
<title>Chemical &amp; Biomolecular Engineering 179 Process Technology of Solid-State Materials Devices  - UC Berkeley</title>
<category>Course</category>
<infohash>f1baa15065060f1830d74111c1ef7741a73c9e98</infohash>
<guid>https://academictorrents.com/details/f1baa15065060f1830d74111c1ef7741a73c9e98</guid>
<link>https://academictorrents.com/details/f1baa15065060f1830d74111c1ef7741a73c9e98</link>
<description>Chemical processing and properties of solid-state materials. Crystal growth and purification. Thin film technology. Application of chemical processing to the manufacture of semiconductors and solid-state devices.</description>
<size>3641717107</size>
</item><item>
<title>Psychology 1 - General Psychology - Fall 2007 - UC Berkeley</title>
<category>Course</category>
<infohash>687bfcdf88598c04edf98c56c3b5f838d43ec2a6</infohash>
<guid>https://academictorrents.com/details/687bfcdf88598c04edf98c56c3b5f838d43ec2a6</guid>
<link>https://academictorrents.com/details/687bfcdf88598c04edf98c56c3b5f838d43ec2a6</link>
<description>Introduction to the principal areas, problems, and concepts of psychology.</description>
<size>3362583340</size>
</item><item>
<title>Environmental Economics and Policy 145 - Fall 2014 - UC Berkeley</title>
<category>Course</category>
<infohash>c633df0181d560050d3f392501c6815135cfb60e</infohash>
<guid>https://academictorrents.com/details/c633df0181d560050d3f392501c6815135cfb60e</guid>
<link>https://academictorrents.com/details/c633df0181d560050d3f392501c6815135cfb60e</link>
<description>This course introduces students to key issues and findings in the field of health and environmental economics. The first half of the course focuses on the theoretical and statistical frameworks used to analyze instances of market failure in the provision of health and environmental goods. The second half focuses on policy-relevant empirical findings in the field.</description>
<size>3926455147</size>
</item><item>
<title>Nuclear Engineering 101, 001 - Fall 2014 - UC Berkeley</title>
<category>Course</category>
<infohash>92644e4132e893c70c3e0ad9ac1d58bef554bd14</infohash>
<guid>https://academictorrents.com/details/92644e4132e893c70c3e0ad9ac1d58bef554bd14</guid>
<link>https://academictorrents.com/details/92644e4132e893c70c3e0ad9ac1d58bef554bd14</link>
<description>### Course Title: Nuclear Reactions and Radiation ### Catalog Description: Energetics and kinetics of nuclear reactions and radioactive decay, fission, fusion, and reactions of energetic neutrons, properties of the fission products and the actinides; nuclear models and transition probabilities; interaction of radiation with matter. ### Course Prerequisite: Physics 7ABC (Physics for Scientists and Engineers). Prerequisite Knowledge and/or Skills: The course uses the following knowledge and skills from prerequisite and lower-division courses: - solve linear, first- and second-order differential equations. - understand and apply the fundamental laws of physical chemistry, such as the Boltzmann distribution for particles in an ideal gas. - understand and apply the fundamentals of classical mechanics, electricity and magnetism, and the elements of quantum mechanics to idealized representations of the structure of nuclei and nuclear reactions. - understand and apply the fundamental notions of probability and probability distributions. ### Course Objectives: - Provide the students with a solid understanding of the fundamentals of those aspects of low-energy nuclear physics that are most important to applications in such areas as nuclear engineering, nuclear and radiochemistry, geosciences, biotechnology, etc. ### Course Outcomes: - calculate the consequences of radioactive growth and decay and nuclear reactions. - calculate estimates of nuclear masses and energetics based on empirical data and nuclear models. - calculate estimates of the lifetimes of nuclear states that are unstable to alpha-, beta-, and gamma decay and internal conversion, based on the theory of simple nuclear models. - use nuclear models to predict low-energy level structure and level energies. - use nuclear models to predict the spins and parities of low-lying levels and estimate their consequences with respect to radioactive decay. 
- use nuclear models to understand the properties of neutron capture and the Breit-Wigner single-level formula to calculate cross sections at resonance and thermal energies. - calculate the kinematics of the interaction of photons with matter and apply stopping power to determine the energy loss rate and ranges of charged particles in matter. - calculate the energies of fission fragments and understand the charge and mass distributions of the fission products, and prompt neutron and gamma rays from fission. ### Topics Covered: - Introduction to nuclear reactions and radioactive decay - mass and energy balances and decay modes - Nuclear and atomic masses - empirical data and the semiempirical mass formula - Application of the semiempirical mass formula to determine the nuclear mass surface and the general characteristics of the energetics of alpha- and beta-decay and nuclear fission - Application of the semiempirical mass formula to uncover empirical evidence for nuclear shell structure; the magic numbers - Introduction to the facts of quantum mechanics and conserved quantities – angular momentum and parity, the Schroedinger equation, and the particle-in-a-box model - The Spherical Shell Model - particle motion, angular momentum, and parity in the spherical potential well and the isotropic harmonic oscillator potentials - The Empirical Shell Model and low-lying levels of spherical and near-spherical nuclei - The Electric Potential of Nuclei and Evidence for Deformed Nuclei – multipole expansion of the electric potential and empirical data on quadrupole moments - Predictions of the Quantized Rigid Rotor and Harmonic Vibrator - comparisons of the idealized models with empirical data on rotational and vibrational spectra of deformed nuclei - Alpha Decay - energetics and the decay probability in the limit of the Gamow model; comparison of model predictions with empirical data. 
Alpha decay schemes - Beta Decay - beta decay, positron emission, and electron capture; the Fermi theory of allowed beta decay; forbidden transitions; Fermi and Gamow-Teller decay; empirical beta decay schemes and correlations with elementary beta decay theory and spherical shell structure - Gamma Decay and Internal Conversion - multipole expansion of the radiation field and qualitative consideration of decay probabilities in the limit of the Moszkowski and Weisskopf models; nuclear isomerism; internal conversion; nuclear structure and empirical data on gamma decay - Nuclear Fission - energetics and empirical data on mass distributions and shell structure, charge distribution of the fission fragments, prompt neutrons and gamma rays - Nuclear Reactions - reaction types and energetics; kinematics of two-body elastic scattering and nuclear reactions; applications to moderation of neutrons and the interaction of charged particles with matter; direct and compound nuclear reactions; resonances and physical plausibility of the form of the Breit-Wigner single-level formula; the Breit-Wigner single-level formula and resonance properties of neutron reactions - Introduction to the Interaction of Charged Particles with Matter - ranges of leptons and heavy charged particles in matter - Introduction to the Interaction of Photons with Matter - the Compton effect; qualitative discussion of the effect of electron binding; pair production; macroscopic cross sections and attenuation coefficients</description>
<size>4967134337</size>
</item><item>
<title>Astronomy C12, 001 - Fall 2014 - UC Berkeley</title>
<category>Course</category>
<infohash>1433dd89d4366df2b534c5e3b6b267776a67e7af</infohash>
<guid>https://academictorrents.com/details/1433dd89d4366df2b534c5e3b6b267776a67e7af</guid>
<link>https://academictorrents.com/details/1433dd89d4366df2b534c5e3b6b267776a67e7af</link>
<description>A tour of the mysteries and inner workings of our solar system. What are planets made of? Why do they orbit the sun the way they do? How do planets form? Why do some bizarre moons have oceans, volcanoes, and ice floes? What makes the Earth hospitable for life? Is the Earth a common type of planet or some cosmic quirk? This course will introduce basic physics, chemistry, and math to understand planets, moons, rings, comets, asteroids, atmospheres, and oceans. Understanding other worlds will help us save our own planet and help us understand our place in the universe. Also listed as Letters and Science C70T and Earth and Planetary Science C12.</description>
<size>11353991797</size>
</item><item>
<title>Bioengineering 200, 001 - Spring 2014  - UC Berkeley</title>
<category>Course</category>
<infohash>c4cd3183550f9cc1cfdcb69f15b076ba439ee062</infohash>
<guid>https://academictorrents.com/details/c4cd3183550f9cc1cfdcb69f15b076ba439ee062</guid>
<link>https://academictorrents.com/details/c4cd3183550f9cc1cfdcb69f15b076ba439ee062</link>
<description>An introduction to research in bioengineering including specific case studies and organization of this rapidly expanding and diverse field.</description>
<size>3801748047</size>
</item><item>
<title>Public Health 150E, 001 - Spring 2015 - UC Berkeley</title>
<category>Course</category>
<infohash>057e7009bdf9e3d09b1ef56ffbd82a1d1a5de23c</infohash>
<guid>https://academictorrents.com/details/057e7009bdf9e3d09b1ef56ffbd82a1d1a5de23c</guid>
<link>https://academictorrents.com/details/057e7009bdf9e3d09b1ef56ffbd82a1d1a5de23c</link>
<description/>
<size>252003485</size>
</item><item>
<title>Electrical Engineering 123, 001 - Spring 2015 - UC Berkeley</title>
<category>Course</category>
<infohash>530416f4f3a4b2cac90e61d5df72d1610dec68b4</infohash>
<guid>https://academictorrents.com/details/530416f4f3a4b2cac90e61d5df72d1610dec68b4</guid>
<link>https://academictorrents.com/details/530416f4f3a4b2cac90e61d5df72d1610dec68b4</link>
<description>Catalog Description: (4 units) Discrete-time signals and systems: Fourier and Z transforms, DFT, 2-dimensional versions. Digital signal processing topics: flow graphs, realizations, FFT, quantization effects, linear prediction. Digital filter design methods: windowing, frequency sampling, S-to-Z methods, frequency-transformation methods, optimization methods, 2-dimensional filter design. Prerequisites: EECS 120, or instructor permission. Course objectives: To develop skills for analyzing and synthesizing algorithms and systems that process discrete-time signals, with emphasis on realization and implementation. Why should you care? Digital signal processing is one of the most important and useful tools an electrical engineer could have. It impacts all modern aspects of life and the sciences, from communication and entertainment to health and economics.</description>
<size>6058044588</size>
</item><item>
<title>Integrative Biology 131 - General Human Anatomy Online Course Videos - UC Berkeley</title>
<category>Course</category>
<infohash>5a0d6b38ab0adb9e52182164ffa8db19822f73ef</infohash>
<guid>https://academictorrents.com/details/5a0d6b38ab0adb9e52182164ffa8db19822f73ef</guid>
<link>https://academictorrents.com/details/5a0d6b38ab0adb9e52182164ffa8db19822f73ef</link>
<description>Integrative Biology 131: General Human Anatomy. Fall 2005. Professor Marian Diamond. The functional anatomy of the human body as revealed by gross and microscopic examination. The Department of Integrative Biology offers a program of instruction that focuses on the integration of structure and function in the evolution of diverse biological systems. It investigates integration at all levels of organization from molecules to the biosphere, and in all taxa of organisms from viruses to higher plants and animals.</description>
<size>8524375550</size>
</item><item>
<title>Enriched Wikilinks dataset</title>
<category>Dataset</category>
<infohash>80d0a22ed403b65f7cc0d81d51759b62c66b41ce</infohash>
<guid>https://academictorrents.com/details/80d0a22ed403b65f7cc0d81d51759b62c66b41ce</guid>
<link>https://academictorrents.com/details/80d0a22ed403b65f7cc0d81d51759b62c66b41ce</link>
<description>Extended Wikilinks dataset. ExtWikilinks is a dataset derived from the Wikilinks data at http://www.iesl.cs.umass.edu/data/wiki-links, enriched using CoreNLP and Elasticsearch. Please see the detailed description at https://github.com/generall/ExtWikilinks</description>
<size>40097448481</size>
</item><item>
<title>[Coursera] VLSI CAD: Logic to Layout (University of Illinois at Urbana-Champaign) (vlsicad)</title>
<category>Course</category>
<infohash>ec1c86afefda42f4b36c34ae7b235ef0bfd6b9d3</infohash>
<guid>https://academictorrents.com/details/ec1c86afefda42f4b36c34ae7b235ef0bfd6b9d3</guid>
<link>https://academictorrents.com/details/ec1c86afefda42f4b36c34ae7b235ef0bfd6b9d3</link>
<description/>
<size>1573125465</size>
</item><item>
<title>[Coursera] Sports and Building Aerodynamics (Eindhoven University of Technology) (spobuildaerodynamics)</title>
<category>Course</category>
<infohash>3ba301f087680f41a88224c49c01218b24de868b</infohash>
<guid>https://academictorrents.com/details/3ba301f087680f41a88224c49c01218b24de868b</guid>
<link>https://academictorrents.com/details/3ba301f087680f41a88224c49c01218b24de868b</link>
<description/>
<size>1442054298</size>
</item><item>
<title>[Coursera] Social Network Analysis (University of Michigan) (sna)</title>
<category>Course</category>
<infohash>066a55d231d3918ad3de994e6211bb99417bcdf0</infohash>
<guid>https://academictorrents.com/details/066a55d231d3918ad3de994e6211bb99417bcdf0</guid>
<link>https://academictorrents.com/details/066a55d231d3918ad3de994e6211bb99417bcdf0</link>
<description/>
<size>1085932178</size>
</item><item>
<title>[Coursera] Scientific Computing (University of Washington) (scientificcomp)</title>
<category>Course</category>
<infohash>6f7e43052129b95f470d3043cfce2bf5c15ae380</infohash>
<guid>https://academictorrents.com/details/6f7e43052129b95f470d3043cfce2bf5c15ae380</guid>
<link>https://academictorrents.com/details/6f7e43052129b95f470d3043cfce2bf5c15ae380</link>
<description/>
<size>3931061575</size>
</item><item>
<title>[Coursera] High Performance Scientific Computing (University of Washington) (scicomp)</title>
<category>Course</category>
<infohash>cb91a3d7a4c4c086be240b54e83ed8d587b31ff5</infohash>
<guid>https://academictorrents.com/details/cb91a3d7a4c4c086be240b54e83ed8d587b31ff5</guid>
<link>https://academictorrents.com/details/cb91a3d7a4c4c086be240b54e83ed8d587b31ff5</link>
<description/>
<size>2909137911</size>
</item><item>
<title>[Coursera] Recommender Systems (University of Minnesota) (recsys)</title>
<category>Course</category>
<infohash>42c3b47bf15a2a1aaadf92156f19315ad2d22967</infohash>
<guid>https://academictorrents.com/details/42c3b47bf15a2a1aaadf92156f19315ad2d22967</guid>
<link>https://academictorrents.com/details/42c3b47bf15a2a1aaadf92156f19315ad2d22967</link>
<description/>
<size>2126467983</size>
</item><item>
<title>[Coursera] Introduction to Quantum Optics (Ludwig-Maximilians-Universität München (LMU)) (qoptintro)</title>
<category>Course</category>
<infohash>b1f4d8ccee24aa956f6226607612ce8867b235a3</infohash>
<guid>https://academictorrents.com/details/b1f4d8ccee24aa956f6226607612ce8867b235a3</guid>
<link>https://academictorrents.com/details/b1f4d8ccee24aa956f6226607612ce8867b235a3</link>
<description/>
<size>792551905</size>
</item><item>
<title>[Coursera] Discrete Optimization (The University of Melbourne) (optimization)</title>
<category>Course</category>
<infohash>ed196d080a2208727a225ab5e7a5630e5bf53be4</infohash>
<guid>https://academictorrents.com/details/ed196d080a2208727a225ab5e7a5630e5bf53be4</guid>
<link>https://academictorrents.com/details/ed196d080a2208727a225ab5e7a5630e5bf53be4</link>
<description/>
<size>3667745880</size>
</item><item>
<title>[Coursera] Introduction to Natural Language Processing (University of Michigan) (nlpintro)</title>
<category>Course</category>
<infohash>78515f90de063ffc144be5e7e726c03849b4e0ed</infohash>
<guid>https://academictorrents.com/details/78515f90de063ffc144be5e7e726c03849b4e0ed</guid>
<link>https://academictorrents.com/details/78515f90de063ffc144be5e7e726c03849b4e0ed</link>
<description/>
<size>1920305125</size>
</item><item>
<title>[Coursera] Natural Language Processing (Stanford University) (nlp)</title>
<category>Course</category>
<infohash>d2c8f8f1651740520b7dfab23438d89bc8c0c0ab</infohash>
<guid>https://academictorrents.com/details/d2c8f8f1651740520b7dfab23438d89bc8c0c0ab</guid>
<link>https://academictorrents.com/details/d2c8f8f1651740520b7dfab23438d89bc8c0c0ab</link>
<description/>
<size>1342036649</size>
</item><item>
<title>[Coursera] Natural Language Processing (Columbia University) (nlangp)</title>
<category>Course</category>
<infohash>8a8f93e18dd6c46c48ee2936ed500b1ff4cc9175</infohash>
<guid>https://academictorrents.com/details/8a8f93e18dd6c46c48ee2936ed500b1ff4cc9175</guid>
<link>https://academictorrents.com/details/8a8f93e18dd6c46c48ee2936ed500b1ff4cc9175</link>
<description/>
<size>1216163367</size>
</item><item>
<title>[Coursera] Neural Networks for Machine Learning (University of Toronto) (neuralnets)</title>
<category>Course</category>
<infohash>3e6f1876bbd46780602e72f4b122329fb668bd2c</infohash>
<guid>https://academictorrents.com/details/3e6f1876bbd46780602e72f4b122329fb668bd2c</guid>
<link>https://academictorrents.com/details/3e6f1876bbd46780602e72f4b122329fb668bd2c</link>
<description/>
<size>1026708997</size>
</item><item>
<title>[Coursera] Nanotechnology: The Basics (Rice University) (nanotech)</title>
<category>Course</category>
<infohash>5d0746961dc72f9ec4a9cc4cd07b1a4e01349fb4</infohash>
<guid>https://academictorrents.com/details/5d0746961dc72f9ec4a9cc4cd07b1a4e01349fb4</guid>
<link>https://academictorrents.com/details/5d0746961dc72f9ec4a9cc4cd07b1a4e01349fb4</link>
<description/>
<size>2392742204</size>
</item><item>
<title>[Coursera] Fundamentals of Music Theory (The University of Edinburgh) (musictheory)</title>
<category>Course</category>
<infohash>8033015783d2df3e8b33a343edb8cea9a0b8319a</infohash>
<guid>https://academictorrents.com/details/8033015783d2df3e8b33a343edb8cea9a0b8319a</guid>
<link>https://academictorrents.com/details/8033015783d2df3e8b33a343edb8cea9a0b8319a</link>
<description/>
<size>1815192017</size>
</item><item>
<title>[Coursera] MRI Fundamentals (Korea Advanced Institute of Science and Technology) (mrifundamentals)</title>
<category>Course</category>
<infohash>77d3798548f7ea65943c4b08bf3167329c18fa49</infohash>
<guid>https://academictorrents.com/details/77d3798548f7ea65943c4b08bf3167329c18fa49</guid>
<link>https://academictorrents.com/details/77d3798548f7ea65943c4b08bf3167329c18fa49</link>
<description/>
<size>640275891</size>
</item><item>
<title>[Coursera] Model Thinking (University of Michigan) (modelthinking)</title>
<category>Course</category>
<infohash>05a8fe3f7e3420df6b83f40b0cbccd05e591d9f4</infohash>
<guid>https://academictorrents.com/details/05a8fe3f7e3420df6b83f40b0cbccd05e591d9f4</guid>
<link>https://academictorrents.com/details/05a8fe3f7e3420df6b83f40b0cbccd05e591d9f4</link>
<description/>
<size>2342059816</size>
</item><item>
<title>[Coursera] Mining Massive Datasets (Stanford University) (mmds)</title>
<category>Course</category>
<infohash>91bc48e6c8341de198c970acccdc87199391ab46</infohash>
<guid>https://academictorrents.com/details/91bc48e6c8341de198c970acccdc87199391ab46</guid>
<link>https://academictorrents.com/details/91bc48e6c8341de198c970acccdc87199391ab46</link>
<description/>
<size>2566408195</size>
</item><item>
<title>[Coursera] Machine Learning (Stanford University) (ml)</title>
<category>Course</category>
<infohash>e8b1f9c5bf555fe58bc73addb83457dd6da69630</infohash>
<guid>https://academictorrents.com/details/e8b1f9c5bf555fe58bc73addb83457dd6da69630</guid>
<link>https://academictorrents.com/details/e8b1f9c5bf555fe58bc73addb83457dd6da69630</link>
<description/>
<size>1590091788</size>
</item><item>
<title>[Coursera] Mathematical Methods for Quantitative Finance (University of Washington) (mathematicalmethods)</title>
<category>Course</category>
<infohash>de1360e53beb1ee13c3285af9bb232109fa168a1</infohash>
<guid>https://academictorrents.com/details/de1360e53beb1ee13c3285af9bb232109fa168a1</guid>
<link>https://academictorrents.com/details/de1360e53beb1ee13c3285af9bb232109fa168a1</link>
<description/>
<size>1680043429</size>
</item><item>
<title>[Coursera] Machine Learning (University of Washington) (machlearning)</title>
<category>Course</category>
<infohash>0cdba976d648fbe322133833323491ebf8b34340</infohash>
<guid>https://academictorrents.com/details/0cdba976d648fbe322133833323491ebf8b34340</guid>
<link>https://academictorrents.com/details/0cdba976d648fbe322133833323491ebf8b34340</link>
<description/>
<size>5649312721</size>
</item><item>
<title>[Coursera] Logic: Language and Information 1 (The University of Melbourne) (logic1)</title>
<category>Course</category>
<infohash>aa0902f96da833cc7f4ec70859a529276f5032d9</infohash>
<guid>https://academictorrents.com/details/aa0902f96da833cc7f4ec70859a529276f5032d9</guid>
<link>https://academictorrents.com/details/aa0902f96da833cc7f4ec70859a529276f5032d9</link>
<description/>
<size>861916429</size>
</item><item>
<title>[Coursera] Introduction to Philosophy (The University of Edinburgh) (introphil)</title>
<category>Course</category>
<infohash>8dcd401c1b3db696fc2f04d3b49c850f0e5cc309</infohash>
<guid>https://academictorrents.com/details/8dcd401c1b3db696fc2f04d3b49c850f0e5cc309</guid>
<link>https://academictorrents.com/details/8dcd401c1b3db696fc2f04d3b49c850f0e5cc309</link>
<description/>
<size>803543873</size>
</item><item>
<title>[Coursera] Introduction to Logic (Stanford University) (intrologic)</title>
<category>Course</category>
<infohash>0342ad0bd7ef06eb1500b5e7c8ef398060827c4e</infohash>
<guid>https://academictorrents.com/details/0342ad0bd7ef06eb1500b5e7c8ef398060827c4e</guid>
<link>https://academictorrents.com/details/0342ad0bd7ef06eb1500b5e7c8ef398060827c4e</link>
<description/>
<size>258934094</size>
</item><item>
<title>[Coursera] The Hardware/Software Interface (University of Washington) (hwswinterface)</title>
<category>Course</category>
<infohash>b63a566df824b39740eb9754e4fe4c0140306f4b</infohash>
<guid>https://academictorrents.com/details/b63a566df824b39740eb9754e4fe4c0140306f4b</guid>
<link>https://academictorrents.com/details/b63a566df824b39740eb9754e4fe4c0140306f4b</link>
<description/>
<size>1435755062</size>
</item><item>
<title>[Coursera] Heterogeneous Parallel Programming (University of Illinois at Urbana-Champaign) (hetero)</title>
<category>Course</category>
<infohash>de34574326abc4666c7ede41d0205a4a2129bf85</infohash>
<guid>https://academictorrents.com/details/de34574326abc4666c7ede41d0205a4a2129bf85</guid>
<link>https://academictorrents.com/details/de34574326abc4666c7ede41d0205a4a2129bf85</link>
<description/>
<size>1476786727</size>
</item><item>
<title>[Coursera] Genomic and Precision Medicine (University of California, San Francisco) (genomicmedicine)</title>
<category>Course</category>
<infohash>c4b2f1ba8fb0ee0b8e49609d2b9c86efae36dc36</infohash>
<guid>https://academictorrents.com/details/c4b2f1ba8fb0ee0b8e49609d2b9c86efae36dc36</guid>
<link>https://academictorrents.com/details/c4b2f1ba8fb0ee0b8e49609d2b9c86efae36dc36</link>
<description/>
<size>1053056853</size>
</item><item>
<title>[Coursera] Genes and the Human Condition (From Behavior to Biotechnology) (University of Maryland, College Park) (genes)</title>
<category>Course</category>
<infohash>5aecd4978d7be3568f454fa6384dcd5683564f63</infohash>
<guid>https://academictorrents.com/details/5aecd4978d7be3568f454fa6384dcd5683564f63</guid>
<link>https://academictorrents.com/details/5aecd4978d7be3568f454fa6384dcd5683564f63</link>
<description/>
<size>1880356360</size>
</item><item>
<title>[Coursera] Game Theory II: Advanced Applications (Stanford University) (gametheory2)</title>
<category>Course</category>
<infohash>fc641711af26384dbace511fa236a6e4dcb9f36c</infohash>
<guid>https://academictorrents.com/details/fc641711af26384dbace511fa236a6e4dcb9f36c</guid>
<link>https://academictorrents.com/details/fc641711af26384dbace511fa236a6e4dcb9f36c</link>
<description/>
<size>2248464381</size>
</item><item>
<title>[Coursera] Game Theory (Stanford University &amp; The University of British Columbia) (gametheory)</title>
<category>Course</category>
<infohash>7b96f8f76c4af7752f35c4fc26607cf50b6bb195</infohash>
<guid>https://academictorrents.com/details/7b96f8f76c4af7752f35c4fc26607cf50b6bb195</guid>
<link>https://academictorrents.com/details/7b96f8f76c4af7752f35c4fc26607cf50b6bb195</link>
<description/>
<size>2989983315</size>
</item><item>
<title>[Coursera] Exploring Quantum Physics (University of Maryland, College Park) (eqp)</title>
<category>Course</category>
<infohash>5261e17c70036651d1f83a6ca66c399da33bb46e</infohash>
<guid>https://academictorrents.com/details/5261e17c70036651d1f83a6ca66c399da33bb46e</guid>
<link>https://academictorrents.com/details/5261e17c70036651d1f83a6ca66c399da33bb46e</link>
<description/>
<size>1740505128</size>
</item><item>
<title>[Coursera] Understanding Einstein: The Special Theory of Relativity (Stanford University) (einstein)</title>
<category>Course</category>
<infohash>be19083019ae3954680733d394e5e5b5b3572a15</infohash>
<guid>https://academictorrents.com/details/be19083019ae3954680733d394e5e5b5b3572a15</guid>
<link>https://academictorrents.com/details/be19083019ae3954680733d394e5e5b5b3572a15</link>
<description/>
<size>8608087384</size>
</item><item>
<title>[Coursera] Principles of Economics for Scientists (Caltech) (econ1scientists)</title>
<category>Course</category>
<infohash>a0af8db9e7b924993ee017e492195b3eb1b0a78b</infohash>
<guid>https://academictorrents.com/details/a0af8db9e7b924993ee017e492195b3eb1b0a78b</guid>
<link>https://academictorrents.com/details/a0af8db9e7b924993ee017e492195b3eb1b0a78b</link>
<description/>
<size>1719455950</size>
</item><item>
<title>[Coursera] Digital Signal Processing (École Polytechnique Fédérale de Lausanne) (dsp)</title>
<category>Course</category>
<infohash>43d881e5128841876104742314ccd9851901f460</infohash>
<guid>https://academictorrents.com/details/43d881e5128841876104742314ccd9851901f460</guid>
<link>https://academictorrents.com/details/43d881e5128841876104742314ccd9851901f460</link>
<description/>
<size>1467181461</size>
</item><item>
<title>[Coursera] Introduction to Data Science (University of Washington) (datasci)</title>
<category>Course</category>
<infohash>1448261dd6932e549ba4a86b5d6750aae858d003</infohash>
<guid>https://academictorrents.com/details/1448261dd6932e549ba4a86b5d6750aae858d003</guid>
<link>https://academictorrents.com/details/1448261dd6932e549ba4a86b5d6750aae858d003</link>
<description/>
<size>1463983891</size>
</item><item>
<title>[Coursera] Galaxies and Cosmology (Caltech) (cosmo)</title>
<category>Course</category>
<infohash>b7d15931742718f330243dc0aeb110136c86359f</infohash>
<guid>https://academictorrents.com/details/b7d15931742718f330243dc0aeb110136c86359f</guid>
<link>https://academictorrents.com/details/b7d15931742718f330243dc0aeb110136c86359f</link>
<description/>
<size>1841765433</size>
</item><item>
<title>[Coursera] Computational Neuroscience (University of Washington) (compneuro)</title>
<category>Course</category>
<infohash>d180bcd510aeec3a20044a0946ac658b9ab30760</infohash>
<guid>https://academictorrents.com/details/d180bcd510aeec3a20044a0946ac658b9ab30760</guid>
<link>https://academictorrents.com/details/d180bcd510aeec3a20044a0946ac658b9ab30760</link>
<description/>
<size>1000642215</size>
</item><item>
<title>[Coursera] Computational Methods for Data Analysis (University of Washington) (compmethods)</title>
<category>Course</category>
<infohash>4281ef52a65d26489e686a0540d86abd4161b88e</infohash>
<guid>https://academictorrents.com/details/4281ef52a65d26489e686a0540d86abd4161b88e</guid>
<link>https://academictorrents.com/details/4281ef52a65d26489e686a0540d86abd4161b88e</link>
<description/>
<size>3749140620</size>
</item><item>
<title>[Coursera] Compilers (Stanford University) (compilers)</title>
<category>Course</category>
<infohash>b7579be97c2f01e4efadb0b6b06f0d071afeaac9</infohash>
<guid>https://academictorrents.com/details/b7579be97c2f01e4efadb0b6b06f0d071afeaac9</guid>
<link>https://academictorrents.com/details/b7579be97c2f01e4efadb0b6b06f0d071afeaac9</link>
<description/>
<size>1318156391</size>
</item><item>
<title>[Coursera] Introduction to Computational Finance and Financial Econometrics (University of Washington) (compfinance)</title>
<category>Course</category>
<infohash>f07203f2eedb4792c351ba0e28406dab9ab54d7d</infohash>
<guid>https://academictorrents.com/details/f07203f2eedb4792c351ba0e28406dab9ab54d7d</guid>
<link>https://academictorrents.com/details/f07203f2eedb4792c351ba0e28406dab9ab54d7d</link>
<description/>
<size>4153499620</size>
</item><item>
<title>[Coursera] Computer Architecture (Princeton University) (comparch)</title>
<category>Course</category>
<infohash>10d1bf7161a1b3ea70697cd61834ceea6c3d1f87</infohash>
<guid>https://academictorrents.com/details/10d1bf7161a1b3ea70697cd61834ceea6c3d1f87</guid>
<link>https://academictorrents.com/details/10d1bf7161a1b3ea70697cd61834ceea6c3d1f87</link>
<description/>
<size>3538349960</size>
</item><item>
<title>[Coursera] Bitcoin and Cryptocurrency Technologies (Princeton University) (bitcointech)</title>
<category>Course</category>
<infohash>412d52b0bfcf2a8bf3201a28c2ba04b6dff5b290</infohash>
<guid>https://academictorrents.com/details/412d52b0bfcf2a8bf3201a28c2ba04b6dff5b290</guid>
<link>https://academictorrents.com/details/412d52b0bfcf2a8bf3201a28c2ba04b6dff5b290</link>
<description/>
<size>1898886107</size>
</item><item>
<title>[Coursera] Bioinformatics: Life Sciences on Your Computer (Johns Hopkins University) (bioinform)</title>
<category>Course</category>
<infohash>b02188bbb764f7f5fdd499c5144add35f56ed3e7</infohash>
<guid>https://academictorrents.com/details/b02188bbb764f7f5fdd499c5144add35f56ed3e7</guid>
<link>https://academictorrents.com/details/b02188bbb764f7f5fdd499c5144add35f56ed3e7</link>
<description/>
<size>498196877</size>
</item><item>
<title>[Coursera] The Caltech-JPL Summer School on Big Data Analytics (Caltech) (bigdataschool)</title>
<category>Course</category>
<infohash>71268d279bcfcd9d88c8989c72158d8d73a2e2fc</infohash>
<guid>https://academictorrents.com/details/71268d279bcfcd9d88c8989c72158d8d73a2e2fc</guid>
<link>https://academictorrents.com/details/71268d279bcfcd9d88c8989c72158d8d73a2e2fc</link>
<description/>
<size>2179056887</size>
</item><item>
<title>[Coursera] Web Intelligence and Big Data (Indian Institute of Technology Delhi) (bigdata)</title>
<category>Course</category>
<infohash>8921ec4d2076e8a6e56a2387d5157aa7c0ef7f10</infohash>
<guid>https://academictorrents.com/details/8921ec4d2076e8a6e56a2387d5157aa7c0ef7f10</guid>
<link>https://academictorrents.com/details/8921ec4d2076e8a6e56a2387d5157aa7c0ef7f10</link>
<description/>
<size>1186092720</size>
</item><item>
<title>[Coursera] Automata (Stanford University) (automata)</title>
<category>Course</category>
<infohash>459e24d28a6abce04cc9fd6e9a148c86dcaac19c</infohash>
<guid>https://academictorrents.com/details/459e24d28a6abce04cc9fd6e9a148c86dcaac19c</guid>
<link>https://academictorrents.com/details/459e24d28a6abce04cc9fd6e9a148c86dcaac19c</link>
<description/>
<size>876348234</size>
</item><item>
<title>[Coursera] Astrobiology and the Search for Extraterrestrial Life (The University of Edinburgh) (astrobio)</title>
<category>Course</category>
<infohash>47d9877fa4f33d109721c65d066a26c3c5e12e0d</infohash>
<guid>https://academictorrents.com/details/47d9877fa4f33d109721c65d066a26c3c5e12e0d</guid>
<link>https://academictorrents.com/details/47d9877fa4f33d109721c65d066a26c3c5e12e0d</link>
<description/>
<size>827700128</size>
</item><item>
<title>[Coursera] Analysis of Algorithms (Princeton University) (aofa)</title>
<category>Course</category>
<infohash>31939517fc774120c37160f93a9b5c73cf6c3271</infohash>
<guid>https://academictorrents.com/details/31939517fc774120c37160f93a9b5c73cf6c3271</guid>
<link>https://academictorrents.com/details/31939517fc774120c37160f93a9b5c73cf6c3271</link>
<description/>
<size>2028617535</size>
</item><item>
<title>[Coursera] Algorithms, Part II (Princeton University) (algs4partII)</title>
<category>Course</category>
<infohash>5c22cd3a2f65f18e153faefda9730b51b21f6521</infohash>
<guid>https://academictorrents.com/details/5c22cd3a2f65f18e153faefda9730b51b21f6521</guid>
<link>https://academictorrents.com/details/5c22cd3a2f65f18e153faefda9730b51b21f6521</link>
<description/>
<size>1656820952</size>
</item><item>
<title>[Coursera] Algorithms, Part I (Princeton University) (algs4partI)</title>
<category>Course</category>
<infohash>43534d22aea22778efce768c4304d6809fa58e6b</infohash>
<guid>https://academictorrents.com/details/43534d22aea22778efce768c4304d6809fa58e6b</guid>
<link>https://academictorrents.com/details/43534d22aea22778efce768c4304d6809fa58e6b</link>
<description/>
<size>1205403640</size>
</item><item>
<title>[Coursera] Algorithms: Design and Analysis, Part 2 (Stanford University) (algo2)</title>
<category>Course</category>
<infohash>e24c15ce89cac9c380284595d1d8a475cb485e28</infohash>
<guid>https://academictorrents.com/details/e24c15ce89cac9c380284595d1d8a475cb485e28</guid>
<link>https://academictorrents.com/details/e24c15ce89cac9c380284595d1d8a475cb485e28</link>
<description/>
<size>1984809211</size>
</item><item>
<title>[Coursera] Algorithms: Design and Analysis, Part 1 (Stanford University) (algo)</title>
<category>Course</category>
<infohash>7bfcfbaf2c53588b23ba1ebccae47a2b9c5197b7</infohash>
<guid>https://academictorrents.com/details/7bfcfbaf2c53588b23ba1ebccae47a2b9c5197b7</guid>
<link>https://academictorrents.com/details/7bfcfbaf2c53588b23ba1ebccae47a2b9c5197b7</link>
<description/>
<size>1948909326</size>
</item><item>
<title>[Coursera] Artificial Intelligence Planning (The University of Edinburgh) (aiplan)</title>
<category>Course</category>
<infohash>560d07faaf09f640fea96b3650874e2903cbc639</infohash>
<guid>https://academictorrents.com/details/560d07faaf09f640fea96b3650874e2903cbc639</guid>
<link>https://academictorrents.com/details/560d07faaf09f640fea96b3650874e2903cbc639</link>
<description/>
<size>1038676774</size>
</item><item>
<title>Wikilinks: A Large-scale Cross-Document Coreference Corpus Labeled via Links to Wikipedia (Extended Dataset)</title>
<category>Dataset</category>
<infohash>689af6f153e097538ad7b8fd4ea3e87ce8f6bc42</infohash>
<guid>https://academictorrents.com/details/689af6f153e097538ad7b8fd4ea3e87ce8f6bc42</guid>
<link>https://academictorrents.com/details/689af6f153e097538ad7b8fd4ea3e87ce8f6bc42</link>
<description>Cross-document coreference resolution is the task of grouping the entity mentions in a collection of documents into sets that each represent a distinct entity. It is central to knowledge base construction and also useful for joint inference with other NLP components. Obtaining large, organic labeled datasets for training and testing cross-document coreference has previously been difficult. We use a method for automatically gathering massive amounts of naturally-occurring cross-document reference data to create the Wikilinks dataset, comprising 40 million mentions over 3 million entities. Our method is based on finding hyperlinks to Wikipedia from a web crawl and using anchor text as mentions. In addition to providing large-scale labeled data without human effort, we are able to include many styles of text beyond newswire and many entity types beyond people. ### Introduction The Wikipedia links (WikiLinks) data consists of web pages that satisfy the following two constraints: a. they contain at least one hyperlink that points to Wikipedia, and b. the anchor text of that hyperlink closely matches the title of the target Wikipedia page. We treat each page on Wikipedia as representing an entity (or concept or idea), and the anchor text as a mention of the entity. The WikiLinks dataset was obtained by iterating over Google's web index. #### Content This dataset is accompanied by the following tech report: https://web.cs.umass.edu/publication/docs/2012/UM-CS-2012-015.pdf Please cite the above report if you use this data. The dataset is divided over 10 gzipped text files, data-0000[0-9]-of-00010.gz. Each of these files can be viewed without uncompressing it using zcat. 
For example: zcat data-00001-of-00010.gz | head gives: URL	ftp://217.219.170.14/Computer%20Group/Faani/vaset%20fani/second/sattari/word/2007/source/s%20crt.docx MENTION	vacuum tube	421	http://en.wikipedia.org/wiki/Vacuum_tube MENTION	vacuum tubes	10838	http://en.wikipedia.org/wiki/Vacuum_tube MENTION	electron gun	598	http://en.wikipedia.org/wiki/Electron_gun MENTION	fluorescent	790	http://en.wikipedia.org/wiki/Fluorescent MENTION	oscilloscope	1307	http://en.wikipedia.org/wiki/Oscilloscope MENTION	computer monitor	1503	http://en.wikipedia.org/wiki/Computer_monitor MENTION	computer monitors	3066	http://en.wikipedia.org/wiki/Computer_monitor MENTION	radar	1657	http://en.wikipedia.org/wiki/Radar MENTION	plasma screens	2162	http://en.wikipedia.org/wiki/Plasma_screen Each file is in the following format: URL\t&lt;url&gt;\n MENTION\t&lt;mention&gt;\t&lt;byte_offset&gt;\t&lt;target_url&gt;\n MENTION\t&lt;mention&gt;\t&lt;byte_offset&gt;\t&lt;target_url&gt;\n MENTION\t&lt;mention&gt;\t&lt;byte_offset&gt;\t&lt;target_url&gt;\n ... TOKEN\t&lt;token&gt;\t&lt;byte_offset&gt;\n TOKEN\t&lt;token&gt;\t&lt;byte_offset&gt;\n TOKEN\t&lt;token&gt;\t&lt;byte_offset&gt;\n ... \n\n URL\t&lt;url&gt;\n ... where each web page is identified by its URL (annotated by "URL"). For every mention (denoted by "MENTION"), we provide the actual mention string, the byte offset of the mention from the start of the page, and the target URL, all separated by tabs. It is possible (and in many cases very likely) that the contents of a web page may change over time. The dataset also contains information about the top 10 least frequent tokens on each page at the time it was crawled. These lines start with "TOKEN" and contain the token string and its byte offset from the start of the page. These token strings can be used as fingerprints to verify whether the page used to generate the data has changed. Finally, pages are separated from each other by two blank lines. 
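The per-page record format above (tab-separated URL, MENTION, and TOKEN lines, with pages delimited by blank lines) can be read with a short script. Below is a minimal sketch of such a reader; the `Page` class and `parse_wikilinks` function are illustrative names, not part of the distributed dataset:

```python
import gzip
from dataclasses import dataclass, field

@dataclass
class Page:
    url: str
    mentions: list = field(default_factory=list)  # (mention, byte_offset, target_url)
    tokens: list = field(default_factory=list)    # (token, byte_offset)

def parse_wikilinks(path):
    """Yield one Page per record from a data-000NN-of-00010.gz shard."""
    page = None
    with gzip.open(path, "rt", encoding="utf-8", errors="replace") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue  # blank lines separate pages
            tag, *fields = line.split("\t")
            if tag == "URL":
                if page is not None:
                    yield page  # emit the previous page
                page = Page(url=fields[0])
            elif tag == "MENTION" and page is not None:
                mention, offset, target = fields
                page.mentions.append((mention, int(offset), target))
            elif tag == "TOKEN" and page is not None:
                token, offset = fields
                page.tokens.append((token, int(offset)))
    if page is not None:
        yield page  # emit the final page
```

Since the files are processed line by line through gzip streaming, a shard never needs to be fully decompressed on disk.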
#### Basic Statistics Number of documents: 11 million Number of entities: 3 million Number of mentions: 40 million Finally, please note that this dataset was created automatically from the web and therefore contains some amount of noise. Enjoy! Amar Subramanya (asubram@google.com) Sameer Singh (sameer@cs.umass.edu) Fernando Pereira (pereira@google.com) Andrew McCallum (mccallum@cs.umass.edu)</description>
<size>194817430579</size>
</item><item>
<title>Wikilinks: A Large-scale Cross-Document Coreference Corpus Labeled via Links to Wikipedia (Original Dataset)</title>
<category>Dataset</category>
<infohash>beefa2ec4161432cd1d9f693a88d3670aae68357</infohash>
<guid>https://academictorrents.com/details/beefa2ec4161432cd1d9f693a88d3670aae68357</guid>
<link>https://academictorrents.com/details/beefa2ec4161432cd1d9f693a88d3670aae68357</link>
<description>Cross-document coreference resolution is the task of grouping the entity mentions in a collection of documents into sets that each represent a distinct entity. It is central to knowledge base construction and also useful for joint inference with other NLP components. Obtaining large, organic labeled datasets for training and testing cross-document coreference has previously been difficult. We use a method for automatically gathering massive amounts of naturally-occurring cross-document reference data to create the Wikilinks dataset, comprising 40 million mentions over 3 million entities. Our method is based on finding hyperlinks to Wikipedia from a web crawl and using anchor text as mentions. In addition to providing large-scale labeled data without human effort, we are able to include many styles of text beyond newswire and many entity types beyond people. ### Introduction The Wikipedia links (WikiLinks) data consists of web pages that satisfy the following two constraints: a. they contain at least one hyperlink that points to Wikipedia, and b. the anchor text of that hyperlink closely matches the title of the target Wikipedia page. We treat each page on Wikipedia as representing an entity (or concept or idea), and the anchor text as a mention of the entity. The WikiLinks dataset was obtained by iterating over Google's web index. #### Content This dataset is accompanied by the following tech report: https://web.cs.umass.edu/publication/docs/2012/UM-CS-2012-015.pdf Please cite the above report if you use this data. The dataset is divided over 10 gzipped text files, data-0000[0-9]-of-00010.gz. Each of these files can be viewed without uncompressing it using zcat. 
For example: zcat data-00001-of-00010.gz | head gives: URL	ftp://217.219.170.14/Computer%20Group/Faani/vaset%20fani/second/sattari/word/2007/source/s%20crt.docx MENTION	vacuum tube	421	http://en.wikipedia.org/wiki/Vacuum_tube MENTION	vacuum tubes	10838	http://en.wikipedia.org/wiki/Vacuum_tube MENTION	electron gun	598	http://en.wikipedia.org/wiki/Electron_gun MENTION	fluorescent	790	http://en.wikipedia.org/wiki/Fluorescent MENTION	oscilloscope	1307	http://en.wikipedia.org/wiki/Oscilloscope MENTION	computer monitor	1503	http://en.wikipedia.org/wiki/Computer_monitor MENTION	computer monitors	3066	http://en.wikipedia.org/wiki/Computer_monitor MENTION	radar	1657	http://en.wikipedia.org/wiki/Radar MENTION	plasma screens	2162	http://en.wikipedia.org/wiki/Plasma_screen Each file is in the following format: URL\t&lt;url&gt;\n MENTION\t&lt;mention&gt;\t&lt;byte_offset&gt;\t&lt;target_url&gt;\n MENTION\t&lt;mention&gt;\t&lt;byte_offset&gt;\t&lt;target_url&gt;\n MENTION\t&lt;mention&gt;\t&lt;byte_offset&gt;\t&lt;target_url&gt;\n ... TOKEN\t&lt;token&gt;\t&lt;byte_offset&gt;\n TOKEN\t&lt;token&gt;\t&lt;byte_offset&gt;\n TOKEN\t&lt;token&gt;\t&lt;byte_offset&gt;\n ... \n\n URL\t&lt;url&gt;\n ... where each web page is identified by its URL (annotated by "URL"). For every mention (denoted by "MENTION"), we provide the actual mention string, the byte offset of the mention from the start of the page, and the target URL, all separated by tabs. It is possible (and in many cases very likely) that the contents of a web page may change over time. The dataset also contains information about the top 10 least frequent tokens on each page at the time it was crawled. These lines start with "TOKEN" and contain the token string and its byte offset from the start of the page. These token strings can be used as fingerprints to verify whether the page used to generate the data has changed. Finally, pages are separated from each other by two blank lines. 
#### Basic Statistics Number of documents: 11 million Number of entities: 3 million Number of mentions: 40 million Finally, please note that this dataset was created automatically from the web and therefore contains some amount of noise. Enjoy! Amar Subramanya (asubram@google.com) Sameer Singh (sameer@cs.umass.edu) Fernando Pereira (pereira@google.com) Andrew McCallum (mccallum@cs.umass.edu)</description>
<size>1837946933</size>
</item><item>
<title>Open Payments Dataset - 2015 Program Year </title>
<category>Dataset</category>
<infohash>de413718a03cd670535c772cf68116775a9e2537</infohash>
<guid>https://academictorrents.com/details/de413718a03cd670535c772cf68116775a9e2537</guid>
<link>https://academictorrents.com/details/de413718a03cd670535c772cf68116775a9e2537</link>
<description>Every year, CMS will update the Open Payments data at least once after its initial publication. The refreshed data will include updates to data disputes and other corrections made since the initial publication. The data documents payments or transfers of value to physicians and teaching hospitals, as well as physician ownership and investment interests. This financial data is submitted by applicable manufacturers and applicable group purchasing organizations (GPOs). #### What data is collected? Applicable manufacturers and GPOs submit data to Open Payments about payments or other transfers of value between applicable manufacturers and GPOs and physicians or teaching hospitals: 1. Paid directly to physicians and teaching hospitals (known as direct payments) 2. Paid indirectly to physicians and teaching hospitals (known as indirect payments) through an intermediary such as a medical specialty society 3. Designated by physicians or teaching hospitals to be paid to another party (known as third-party payments) There are three distinct ways to review and search the data, and the summary data dashboard provides an overview of the published data. The Open Payments Final Rule §403.910 provides applicable manufacturers and applicable GPOs the opportunity to request a delay in publication for a period not to exceed four calendar years after the date the payment or other transfer of value was made, or upon the approval, licensure, or clearance of the covered drug, device, biological, or medical supply by the FDA.</description>
<size>584875180</size>
</item><item>
<title>Open Payments Dataset - 2014 Program Year </title>
<category>Dataset</category>
<infohash>88f6fff84d7c2a2769348ab4c2b0ecb318b43752</infohash>
<guid>https://academictorrents.com/details/88f6fff84d7c2a2769348ab4c2b0ecb318b43752</guid>
<link>https://academictorrents.com/details/88f6fff84d7c2a2769348ab4c2b0ecb318b43752</link>
<description>Every year, CMS will update the Open Payments data at least once after its initial publication. The refreshed data will include updates to data disputes and other corrections made since the initial publication. The data documents payments or transfers of value to physicians and teaching hospitals, as well as physician ownership and investment interests. This financial data is submitted by applicable manufacturers and applicable group purchasing organizations (GPOs). #### What data is collected? Applicable manufacturers and GPOs submit data to Open Payments about payments or other transfers of value between applicable manufacturers and GPOs and physicians or teaching hospitals: 1. Paid directly to physicians and teaching hospitals (known as direct payments) 2. Paid indirectly to physicians and teaching hospitals (known as indirect payments) through an intermediary such as a medical specialty society 3. Designated by physicians or teaching hospitals to be paid to another party (known as third-party payments) There are three distinct ways to review and search the data, and the summary data dashboard provides an overview of the published data. The Open Payments Final Rule §403.910 provides applicable manufacturers and applicable GPOs the opportunity to request a delay in publication for a period not to exceed four calendar years after the date the payment or other transfer of value was made, or upon the approval, licensure, or clearance of the covered drug, device, biological, or medical supply by the FDA.</description>
<size>728444845</size>
</item><item>
<title>Open Payments Dataset - 2013 Program Year </title>
<category>Dataset</category>
<infohash>92a1aeaaf741f3d1669ad0f0186d96ec168ee550</infohash>
<guid>https://academictorrents.com/details/92a1aeaaf741f3d1669ad0f0186d96ec168ee550</guid>
<link>https://academictorrents.com/details/92a1aeaaf741f3d1669ad0f0186d96ec168ee550</link>
<description>Every year, CMS will update the Open Payments data at least once after its initial publication. The refreshed data will include updates to data disputes and other corrections made since the initial publication. The data documents payments or transfers of value to physicians and teaching hospitals, as well as physician ownership and investment interests. This financial data is submitted by applicable manufacturers and applicable group purchasing organizations (GPOs). #### What data is collected? Applicable manufacturers and GPOs submit data to Open Payments about payments or other transfers of value between applicable manufacturers and GPOs and physicians or teaching hospitals: 1. Paid directly to physicians and teaching hospitals (known as direct payments) 2. Paid indirectly to physicians and teaching hospitals (known as indirect payments) through an intermediary such as a medical specialty society 3. Designated by physicians or teaching hospitals to be paid to another party (known as third-party payments) There are three distinct ways to review and search the data, and the summary data dashboard provides an overview of the published data. The Open Payments Final Rule §403.910 provides applicable manufacturers and applicable GPOs the opportunity to request a delay in publication for a period not to exceed four calendar years after the date the payment or other transfer of value was made, or upon the approval, licensure, or clearance of the covered drug, device, biological, or medical supply by the FDA.</description>
<size>277982372</size>
</item><item>
<title>Design of a hydraulic dexterous manipulator for minimally invasive surgery</title>
<category>Paper</category>
<infohash>6e839ebb0394250327729c7f5e3b000061c71bfe</infohash>
<guid>https://academictorrents.com/details/6e839ebb0394250327729c7f5e3b000061c71bfe</guid>
<link>https://academictorrents.com/details/6e839ebb0394250327729c7f5e3b000061c71bfe</link>
<description>The research described here identifies the limitations of existing robotic surgical platforms, which include the balance between the scale of the robot and its manipulability in terms of range of motion, load capacity, and tool capability, then develops a means of overcoming them by taking advantage of fluid power as an enabling technology with its inherent power density and controllability. The approach described here differs significantly from conventional surgical robots in that the robot is embedded within the surgical device itself, whereas in the conventional system, a general-purpose robot is used to manipulate various surgical tools. This is done in order to demonstrate that fluid power can be used advantageously for the design of embedded surgical robotic systems for minimally invasive surgery. To enable the design of a fluid powered surgical robot, it was first necessary to identify the design requirements for a robot of this nature as well as the considerations unique to this approach. To this end, a quantification of the necessary load capacity for natural orifice robots was conducted. Further, through a review of the literature in the fields of surgery and robotics, considerations of necessary workspace and limitations for the prevention of tissue damage were explored. The results of these analyses are presented. The technologies that comprise this novel surgical robotic system include a hydraulic control valve, actuation units, and an enabling structure. The intended application of these technologies introduced numerous limitations and challenges to the design process. The most stringent of these limitations was that of overall size, due to the realities of patient anatomy, which prevented the use of commercially available hydraulic components. 
An assemblage of components that achieves the aforementioned design requirements is described, including the design of a novel hydraulic control valve that enables manipulation of three actuators using a single valve sized to fit within the working channel of a surgical endoscope. The advantages of the described approach are that the device enables greater miniaturization, improves cost effectiveness, and is easier to move. The mobility and the relaxed requirements for operating-room cleanliness can be potentially useful for mobile clinics, outpatient clinical settings, and the battlefield. Being more cost effective and having a small overall size, robot-assisted surgical devices of this kind can be widely deployed, even in rural or other less technology-intensive environments. Through careful review of the literature and analytical evaluation of the various proposed concepts, it was possible to arrive at a design that meets the needs of modern surgical interventions while addressing the perceived limitations of existing surgical robotics. Through the efforts described in this dissertation, much new information was produced and new developments resulted. The considerations of hydraulic power for surgical robots were evaluated and are applicable to other surgical tasks where hydraulic power may be used advantageously. A quantification of the load requirements for surgical robots performing abdominal procedures was produced, which will provide a guide for other researchers developing surgical robots. These values are difficult to find in the literature and are a valuable resource for the field. An alternative, simplified model for predicting the behavior of continuum beams under load was developed to provide an inverse formulation for computing beam shape and end loads. This is useful because continuum beams are widely used for minimally invasive surgical manipulators as well as in a wide variety of other applications. 
Finally, a novel valve concept and two possible designs realizing this concept were developed. These valve designs facilitate control over the three actuators in an antagonistic arrangement. Further, the valve designs enable proportional control of the three actuators at a size scale not commercially available. In summary, the design of a novel hydraulic surgical manipulator has been performed as a summation of its parts. This design demonstrates the feasibility of the fluid-power approach to embedded minimally invasive surgical robotics. The pursuit of this research has provided many unique challenges, and the work presented here has addressed many of them as well as laid the foundation for future developments in the application of hydraulic power to the growing field of surgical robotics for minimally invasive surgery.</description>
<size>4199256</size>
</item><item>
<title>Experiences with inquiry-based learning in an introductory mechanics course</title>
<category>Paper</category>
<infohash>dabacc8d81e9a3eb10328999896f69591a9f6ecf</infohash>
<guid>https://academictorrents.com/details/dabacc8d81e9a3eb10328999896f69591a9f6ecf</guid>
<link>https://academictorrents.com/details/dabacc8d81e9a3eb10328999896f69591a9f6ecf</link>
<description>Inquiry-based learning is an educational approach that allows the student to take ownership of the education process by self-identifying a problem and formulating their own solution. The application of this method of teaching was explored in an introductory mechanics course taken by both engineering and engineering technology students. Students were tasked with applying the principles of fundamental static equilibrium analysis to objects found in their normal surroundings. The deliverable for this assignment consisted of a photograph of an object they found to be in static equilibrium and a short description of how the state of the object could be described mathematically. Student submissions for this task exhibited a wide range of quality and imagination. Examples of student work are presented along with discussion of lessons learned and recommendations for the use of this method in the future. The overall student response to this task was positive, and thus these efforts will be expanded.</description>
<size>425664</size>
</item><item>
<title>Evaluation of student learning outcomes due to self-guided engineering analysis of surroundings</title>
<category>Paper</category>
<infohash>64cba143eb0ef70ae025a66d74e8b83af30bb7f2</infohash>
<guid>https://academictorrents.com/details/64cba143eb0ef70ae025a66d74e8b83af30bb7f2</guid>
<link>https://academictorrents.com/details/64cba143eb0ef70ae025a66d74e8b83af30bb7f2</link>
<description>Inquiry-based learning is an educational approach that allows the student to take ownership of the education process by self-identifying a problem and formulating their own solution. The application of this method of teaching was explored in an introductory mechanics course taken by both engineering and engineering technology students. Students were tasked with applying the principles of fundamental engineering analysis to objects found in their normal surroundings over the course of the semester. By asking students to complete assignments where they had to apply engineering analysis to an everyday object, it was intended for the student to look beyond their textbook and relate the course material to their surroundings. Similar work by others has demonstrated success in getting students to make the connection between the classroom and the “real world”. A preliminary study was conducted using this concept for a single assignment involving static equilibrium in the same course. Through this effort it was revealed that, in general, students enjoyed completing the assignment and the ensuing class discussion was more fruitful than with other course topics. As a result, the concept was adopted more fully into the course with multiple assignments throughout the semester. The deliverables for these assignments consisted of either a photograph or a sketch of an object that demonstrated the concepts relevant to the week’s course material, accompanied by a brief description of what was depicted in the photograph or sketch. Examples of students’ work will be presented along with discussion of lessons learned and recommendations for the use of this method in the future. Evaluation of student learning outcomes will be conducted through the issuance of pre- and post-assessments using the Concept Assessment Tool for Statics as well as performance on course examinations. Comparisons will be made between a treatment group, which was subject to the analysis assignments, and a control group, which did not complete the analysis assignments.</description>
<size>3938107</size>
</item><item>
<title>The relationship between class size and active Twitter participation in the engineering classroom</title>
<category>Paper</category>
<infohash>bc7109f293cb0f2a06d304260da138efa8e43a7e</infohash>
<guid>https://academictorrents.com/details/bc7109f293cb0f2a06d304260da138efa8e43a7e</guid>
<link>https://academictorrents.com/details/bc7109f293cb0f2a06d304260da138efa8e43a7e</link>
<description>The use of Twitter in the higher education classroom has expanded in recent years as educators come to realize the benefits of social media use as a tool for inter-student communication. Further benefits have been found in relation to asking students to communicate the content of a given course to a broader, general public audience. However, at the same time it can be a challenge to promote active participation in this sort of activity due to students’ apprehension about putting themselves out there and being wrong. One hypothesis is that this can be overcome by employing a larger cohort of participants, thus creating a sense of anonymity through presence within a large population. Further, the use of a larger participant pool increases the odds of it containing students who are willing to drive the online classroom discussion through their participation. It is expected that the presence of such individuals lowers the barrier to entry for the rest of the students. These questions were explored over a multi-semester study of student participation in directed Twitter discussions within an engineering mechanics classroom. First, a small cohort of students was used, and later the same study was conducted with a large cohort of students. Comparisons will be made between these two cohorts on the basis of active engagement in the assigned tasks, course performance, and student perception of the tasks. As part of the study, students were tasked with applying the principles of fundamental engineering analysis to objects found in their normal surroundings over the course of the semester. By asking students to complete assignments where they had to apply engineering analysis to an everyday object, it was intended for the student to look beyond their textbook and relate the course material to their surroundings. Similar work by others has demonstrated success in getting students to make the connection between the classroom and the “real world”. The deliverables for these assignments consist of either a photograph, video, or written description of an object or event that demonstrates the concepts relevant to the week’s course material. Examples of students’ work will be presented along with discussion of lessons learned and recommendations for the use of this method in the future. Evaluation of student learning outcomes will be conducted through the issuance of pre- and post-assessments using the Concept Assessment Tool for Statics as well as performance on course examinations. Comparisons will be made between the small cohort and large cohort groups.</description>
<size>471293</size>
</item><item>
<title>Use of a Rube Goldberg design project for engineering dynamics</title>
<category>Paper</category>
<infohash>c6c09f4c5d8c45018d5692e8e3e20fc51b3fe1d4</infohash>
<guid>https://academictorrents.com/details/c6c09f4c5d8c45018d5692e8e3e20fc51b3fe1d4</guid>
<link>https://academictorrents.com/details/c6c09f4c5d8c45018d5692e8e3e20fc51b3fe1d4</link>
<description>Rube Goldberg was a cartoonist and engineer who is best known for his series of cartoons showing complicated gadgets designed to complete simple tasks. The phrase “Rube Goldberg” has since been adopted as an adjective used to describe the act of accomplishing something simple through complicated means. When Rube Goldberg design is incorporated into the engineering classroom, it allows for a unique blend of creativity and challenge that is often hard to accommodate in engineering. This paper will present a first look at my use of a Rube Goldberg design project as a tool for teaching engineering dynamics. The project was implemented as a semester-long assignment. Students were divided into groups and assigned a theme picked from the topic areas covered in the engineering dynamics curriculum, for example: instantaneous centers of rotation, damped vibration, or impulsive motion. Each group must then build one stage of what will become a class Rube Goldberg machine under the stipulation that their stage must demonstrate the assigned topic area. Further, a report must be submitted describing the assigned topic area and how their stage demonstrates that topic area. One additional aspect of the project is that at the end of the semester, each stage will be assembled to build the full Rube Goldberg machine. As such, the student groups must communicate with each other to determine how to transition between stages. This aspect is intended to incorporate an additional layer of communication and collaboration early in the undergraduate engineering curriculum. This project is being piloted during the current semester, and thus first results will be presented in the full paper regarding the outcomes associated with the use of this course project.</description>
<size>3859932</size>
</item><item>
<title>Twitter in the engineering classroom</title>
<category>Paper</category>
<infohash>20a31f46a273135ac9085854ad9b974e3ac51d9d</infohash>
<guid>https://academictorrents.com/details/20a31f46a273135ac9085854ad9b974e3ac51d9d</guid>
<link>https://academictorrents.com/details/20a31f46a273135ac9085854ad9b974e3ac51d9d</link>
<description>The micro-blogging platform Twitter has been employed by some in higher education as a tool for enhanced student engagement. This platform has shown promise as an educational tool for the promotion of critical reading and writing and the concise expression of ideas. However, it is unclear in what settings and under what circumstances Twitter can be effectively employed in the engineering classroom. These questions were explored over a multi-semester study of student participation in directed social media discussions within the engineering classroom. The various cohorts of students included in this study were drawn from engineering courses. Comparisons were made between these multiple cohorts on the basis of active engagement in the assigned tasks, performance on homework and examinations, and overall course performance. Through the process of using this practice in the classroom, it was found that it was difficult to encourage engineering students to participate in Twitter discussions regardless of the incentive provided. Limited evidence was found of greater course achievement correlating with greater participation in Twitter-based tasks. It is expected that greater effort is required in familiarizing students with the Twitter platform and increasing their comfort level with asking questions and carrying out discussions in a public forum.</description>
<size>226947</size>
</item><item>
<title>Robotics REU in Undergraduate Engineering Research</title>
<category>Paper</category>
<infohash>3ae18f3696543951a83e0b580eb11ae2ece3c7be</infohash>
<guid>https://academictorrents.com/details/3ae18f3696543951a83e0b580eb11ae2ece3c7be</guid>
<link>https://academictorrents.com/details/3ae18f3696543951a83e0b580eb11ae2ece3c7be</link>
<description/>
<size>892516</size>
</item><item>
<title>A methodology for exploring, documenting, and improving humanitarian service learning in the university</title>
<category>Paper</category>
<infohash>4cdade8fa3c6d451441fda7ae19bb53537b6ee21</infohash>
<guid>https://academictorrents.com/details/4cdade8fa3c6d451441fda7ae19bb53537b6ee21</guid>
<link>https://academictorrents.com/details/4cdade8fa3c6d451441fda7ae19bb53537b6ee21</link>
<description>Through the use of service learning in higher education, universities hope to both provide real benefit to the partnering community and allow students to develop a greater understanding of course curriculum, their discipline, and their personal positioning within society. Through these educational activities, service learning seeks to engage students in critical thinking processes while simultaneously achieving a greater sense of civic and social responsibility through targeted participation in meaningful community service activities. However, in practice, service learning can take a variety of forms predicated on technical, cultural, societal, and political constraints. Thus, while some work shows positive effects on students’ attitudes, social behaviour, and academic performance, less research has demonstrated long-term community impact. Nor has much research shown that participation in service learning has a long-term impact on students’ ethical perspectives and frameworks, or whether those ethical frames carry on to their professional careers. Moreover, as institutions partner with such humanitarian service groups as Engineers Without Borders USA, we know considerably less about the institutional cultures and climates that are developed through such partnerships and how sustainable they are, given those inherent technical, political, and cultural limitations. As a first step towards these goals, this paper proposes a methodology for investigating the impacts of service learning activities on both the students and communities involved.</description>
<size>241508</size>
</item><item>
<title>Incorporation of liberal education into the engineering curriculum at a polytechnic</title>
<category>Paper</category>
<infohash>c4b6df17cd68ce4afc1634bce40d45f95384bc50</infohash>
<guid>https://academictorrents.com/details/c4b6df17cd68ce4afc1634bce40d45f95384bc50</guid>
<link>https://academictorrents.com/details/c4b6df17cd68ce4afc1634bce40d45f95384bc50</link>
<description>Traditional engineering education often falls short when it comes to the inclusion of issues related to social justice, ethics, and globalization. While engineering programs are required to include ethics content for accreditation, most seem to rely primarily on general education electives, providing only a high-level overview and including the bare minimum in the program core. This can lead to an inconsistent student experience and minimal exposure to topics which are critically important for achieving worldwide equity and operating responsibly in the engineering workplace. Given the role that engineers play in economic development, this is unacceptable. It is therefore the responsibility of engineering educators to find a better way to shape the future of the engineering profession. This paper outlines the early efforts at integrating the topics of ethics, social justice, and social responsibility more directly into the engineering curriculum. This is approached from the perspectives of pedagogy, curriculum development, and service learning opportunities. It is within this context that the authors hope to influence students’ awareness of and connection to social and environmental issues as well as the ethical frameworks they develop and carry with them into their professional careers. This paper centers around the creation and delivery of a new introductory engineering course combining liberal education topics and introductory engineering topics. This course also includes a substantial design project which incorporates a cultural engagement component through collaboration with international partners. The first offering of this new course revealed that, while some reservations persist, students found value in exploring what it means to be an engineer in a broader global context.</description>
<size>5629988</size>
</item><item>
<title>[Coursera] Text Mining and Analytics</title>
<category>Course</category>
<infohash>e2c129491a3841bfac5d7b08b41ad79387132a23</infohash>
<guid>https://academictorrents.com/details/e2c129491a3841bfac5d7b08b41ad79387132a23</guid>
<link>https://academictorrents.com/details/e2c129491a3841bfac5d7b08b41ad79387132a23</link>
<description/>
<size>1063576833</size>
</item><item>
<title>A Brief Review of Nature-Inspired Algorithms for Optimization</title>
<category>Paper</category>
<infohash>aec97f8374cfa5b8bce86cd542870fe849e1afb5</infohash>
<guid>https://academictorrents.com/details/aec97f8374cfa5b8bce86cd542870fe849e1afb5</guid>
<link>https://academictorrents.com/details/aec97f8374cfa5b8bce86cd542870fe849e1afb5</link>
<description/>
<size>161539</size>
</item><item>
<title>Pre-configured (Mint) linux based virtual machine image</title>
<category>Dataset</category>
<infohash>5ceb6902b46a344de6db18c2ec5a14bb24a7df4a</infohash>
<guid>https://academictorrents.com/details/5ceb6902b46a344de6db18c2ec5a14bb24a7df4a</guid>
<link>https://academictorrents.com/details/5ceb6902b46a344de6db18c2ec5a14bb24a7df4a</link>
<description>Pre-configured (Mint) linux based virtual machine image for courses taught by Swami Iyer at UMass Boston in Spring 2017.</description>
<size>3062967296</size>
</item><item>
<title>Microsoft Academic Graph - 2016/02/05</title>
<category>Dataset</category>
<infohash>1e0a00b9c606cf87c03e676f75929463c7756fb5</infohash>
<guid>https://academictorrents.com/details/1e0a00b9c606cf87c03e676f75929463c7756fb5</guid>
<link>https://academictorrents.com/details/1e0a00b9c606cf87c03e676f75929463c7756fb5</link>
<description>The Microsoft Academic Graph is a heterogeneous graph containing scientific publication records, citation relationships between those publications, as well as authors, institutions, journals, conferences, and fields of study. This graph is used to power experiences in Bing, Cortana, and in Microsoft Academic.</description>
<size>28942081611</size>
</item><item>
<title>The DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus</title>
<category>Dataset</category>
<infohash>34e2b78745138186976cbc27939b1b34d18bd5b3</infohash>
<guid>https://academictorrents.com/details/34e2b78745138186976cbc27939b1b34d18bd5b3</guid>
<link>https://academictorrents.com/details/34e2b78745138186976cbc27939b1b34d18bd5b3</link>
<description>The DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus (TIMIT) Training and Test Data. The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems. TIMIT has resulted from the joint efforts of several sites under sponsorship from the Defense Advanced Research Projects Agency - Information Science and Technology Office (DARPA-ISTO). Text corpus design was a joint effort among the Massachusetts Institute of Technology (MIT), Stanford Research Institute (SRI), and Texas Instruments (TI). The speech was recorded at TI, transcribed at MIT, and has been maintained, verified, and prepared for CD-ROM production by the National Institute of Standards and Technology (NIST). This file contains a brief description of the TIMIT Speech Corpus. Additional information, including the referenced material and some relevant reprints of articles, may be found in the printed documentation, which is also available from NTIS (NTIS# PB91-100354).

## Corpus Speaker Distribution

TIMIT contains a total of 6300 sentences, 10 sentences spoken by each of 630 speakers from 8 major dialect regions of the United States. Table 1 shows the number of speakers for the 8 dialect regions, broken down by sex. The percentages are given in parentheses. A speaker's dialect region is the geographical area of the U.S. where they lived during their childhood years. The geographical areas correspond with recognized dialect regions in the U.S. (Language Files, Ohio State University Linguistics Dept., 1982), with the exception of the Western region (dr7), in which dialect boundaries are not known with any confidence, and dialect region 8, where the speakers moved around a lot during their childhood.
Table 1: Dialect distribution of speakers

| Dialect Region (dr) | #Male | #Female | Total |
| --- | --- | --- | --- |
| 1 | 31 (63%) | 18 (37%) | 49 (8%) |
| 2 | 71 (70%) | 31 (30%) | 102 (16%) |
| 3 | 79 (77%) | 23 (23%) | 102 (16%) |
| 4 | 69 (69%) | 31 (31%) | 100 (16%) |
| 5 | 62 (63%) | 36 (37%) | 98 (16%) |
| 6 | 30 (65%) | 16 (35%) | 46 (7%) |
| 7 | 74 (74%) | 26 (26%) | 100 (16%) |
| 8 | 22 (67%) | 11 (33%) | 33 (5%) |
| Total | 438 (70%) | 192 (30%) | 630 (100%) |

The dialect regions are: dr1: New England; dr2: Northern; dr3: North Midland; dr4: South Midland; dr5: Southern; dr6: New York City; dr7: Western; dr8: Army Brat (moved around).

## Corpus Text Material

The text material in the TIMIT prompts (found in the file "prompts.doc") consists of 2 dialect "shibboleth" sentences designed at SRI, 450 phonetically-compact sentences designed at MIT, and 1890 phonetically-diverse sentences selected at TI. The dialect sentences (the SA sentences) were meant to expose the dialectal variants of the speakers and were read by all 630 speakers. The phonetically-compact sentences were designed to provide a good coverage of pairs of phones, with extra occurrences of phonetic contexts thought to be either difficult or of particular interest. Each speaker read 5 of these sentences (the SX sentences) and each text was spoken by 7 different speakers.
The phonetically-diverse sentences (the SI sentences) were selected from existing text sources - the Brown Corpus (Kuchera and Francis, 1967) and the Playwrights Dialog (Hultzen, et al., 1964) - so as to add diversity in sentence types and phonetic contexts. The selection criteria maximized the variety of allophonic contexts found in the texts. Each speaker read 3 of these sentences, with each sentence being read only by a single speaker. Table 2 summarizes the speech material in TIMIT.

Table 2: TIMIT speech material

| Sentence Type | #Sentences | #Speakers | Total | #Sentences/Speaker |
| --- | --- | --- | --- | --- |
| Dialect (SA) | 2 | 630 | 1260 | 2 |
| Compact (SX) | 450 | 7 | 3150 | 5 |
| Diverse (SI) | 1890 | 1 | 1890 | 3 |
| Total | 2342 | | 6300 | 10 |

## Suggested Training/Test Subdivision

The speech material has been subdivided into portions for training and testing. The criteria for the subdivision is described in the file "testset.doc". THIS SUBDIVISION HAS NO RELATION TO THE DATA DISTRIBUTED ON THE PROTOTYPE VERSION OF THE CDROM.
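The utterance counts in Table 2 follow directly from the corpus design (texts per sentence type multiplied by speakers per text); a minimal Python sanity check of that arithmetic, using only the numbers stated above:

```python
# Sanity-check the TIMIT sentence counts from Table 2.
# Design: 2 SA texts read by all 630 speakers, 450 SX texts read by
# 7 speakers each, and 1890 SI texts read by 1 speaker each.
material = {
    "SA": {"texts": 2,    "speakers_per_text": 630},
    "SX": {"texts": 450,  "speakers_per_text": 7},
    "SI": {"texts": 1890, "speakers_per_text": 1},
}

# Total utterances per sentence type = texts * speakers per text.
totals = {k: v["texts"] * v["speakers_per_text"] for k, v in material.items()}
assert totals == {"SA": 1260, "SX": 3150, "SI": 1890}

# 630 speakers x 10 sentences each (2 SA + 5 SX + 3 SI) = 6300 utterances.
assert sum(totals.values()) == 6300 == 630 * (2 + 5 + 3)
```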
## Core Test Set

The test data has a core portion containing 24 speakers, 2 male and 1 female from each dialect region. The core test speakers are shown in Table 3. Each speaker read a different set of SX sentences. Thus the core test material contains 192 sentences, 5 SX and 3 SI for each speaker, each having a distinct text prompt.

Table 3: The core test set of 24 speakers

| Dialect | Male | Female |
| --- | --- | --- |
| 1 | DAB0, WBT0 | ELC0 |
| 2 | TAS1, WEW0 | PAS0 |
| 3 | JMP0, LNT0 | PKT0 |
| 4 | LLL0, TLS0 | JLM0 |
| 5 | BPM0, KLT0 | NLP0 |
| 6 | CMJ0, JDH0 | MGD0 |
| 7 | GRT0, NJM0 | DHC0 |
| 8 | JLN0, PAM0 | MLD0 |

## Complete Test Set

A more extensive test set was obtained by including the sentences from all speakers that read any of the SX texts included in the core test set. In doing so, no sentence text appears in both the training and test sets. This complete test set contains a total of 168 speakers and 1344 utterances, accounting for about 27% of the total speech material. The resulting dialect distribution of the 168-speaker test set is given in Table 4. The complete test material contains 624 distinct texts.
Table 4: Dialect distribution for complete test set

| Dialect | #Male | #Female | Total |
| --- | --- | --- | --- |
| 1 | 7 | 4 | 11 |
| 2 | 18 | 8 | 26 |
| 3 | 23 | 3 | 26 |
| 4 | 16 | 16 | 32 |
| 5 | 17 | 11 | 28 |
| 6 | 8 | 3 | 11 |
| 7 | 15 | 8 | 23 |
| 8 | 8 | 3 | 11 |
| Total | 112 | 56 | 168 |

## CDROM TIMIT Directory and File Structure

The speech and associated data is organized on the CD-ROM according to the following hierarchy: /&lt;CORPUS&gt;/&lt;USAGE&gt;/&lt;DIALECT&gt;/&lt;SEX&gt;&lt;SPEAKER_ID&gt;/&lt;SENTENCE_ID&gt;.&lt;FILE_TYPE&gt;</description>
<size>440207227</size>
</item><item>
<title>US Stock Market End of Day dataset</title>
<category>Dataset</category>
<infohash>c5a49e46249fef6a3219919fef96fd0265da4d3a</infohash>
<guid>https://academictorrents.com/details/c5a49e46249fef6a3219919fef96fd0265da4d3a</guid>
<link>https://academictorrents.com/details/c5a49e46249fef6a3219919fef96fd0265da4d3a</link>
<description>End-of-day data for 4974 stock symbols. Includes close, open, high, low, volume, and date. Data was collected from Google Finance public data.

+----------+------------+
| Table    | Size in MB |
+----------+------------+
| surf_eod |    1109.00 |
+----------+------------+
1 row in set (0.00 sec)

mysql&gt; SELECT COUNT(DISTINCT( ticker )) FROM surf_eod;
+---------------------------+
| COUNT(DISTINCT( ticker )) |
+---------------------------+
|                      4974 |
+---------------------------+
1 row in set (6.31 sec)

mysql&gt; describe surf_eod;
+--------+-------------+------+-----+-------------------+-----------------------------+
| Field  | Type        | Null | Key | Default           | Extra                       |
+--------+-------------+------+-----+-------------------+-----------------------------+
| ticker | varchar(10) | YES  | MUL | NULL              |                             |
| date   | date        | YES  |     | NULL              |                             |
| close  | varchar(20) | YES  |     | NULL              |                             |
| high   | varchar(20) | YES  |     | NULL              |                             |
| low    | varchar(20) | YES  |     | NULL              |                             |
| open   | varchar(20) | YES  |     | NULL              |                             |
| volume | varchar(20) | YES  |     | NULL              |                             |
| time   | timestamp   | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
+--------+-------------+------+-----+-------------------+-----------------------------+
8 rows in set (0.04 sec)

mysql&gt; SELECT COUNT(*) FROM surf_eod;
+----------+
| COUNT(*) |
+----------+
| 17726722 |
+----------+
1 row in set (25.18 sec)</description>
<size>250708117</size>
</item><item>
<title>AOL Search Data 20M web queries (2006)</title>
<category>Dataset</category>
<infohash>cd339bddeae7126bb3b15f3a72c903cb0c401bd1</infohash>
<guid>https://academictorrents.com/details/cd339bddeae7126bb3b15f3a72c903cb0c401bd1</guid>
<link>https://academictorrents.com/details/cd339bddeae7126bb3b15f3a72c903cb0c401bd1</link>
<description>#### 500k User Session Collection

This collection is distributed for NON-COMMERCIAL RESEARCH USE ONLY. Any application of this collection for commercial purposes is STRICTLY PROHIBITED.

#### Brief description

This collection consists of ~20M web queries collected from ~650k users over three months. The data is sorted by anonymous user ID and sequentially arranged. The goal of this collection is to provide real query log data that is based on real users. It could be used for personalization, query reformulation, or other types of search research. The data set includes AnonID, Query, QueryTime, ItemRank, ClickURL.

- AnonID - an anonymous user ID number.
- Query - the query issued by the user, case shifted with most punctuation removed.
- QueryTime - the time at which the query was submitted for search.
- ItemRank - if the user clicked on a search result, the rank of the item on which they clicked is listed.
- ClickURL - if the user clicked on a search result, the domain portion of the URL in the clicked result is listed.

Each line in the data represents one of two types of events: 1. A query that was NOT followed by the user clicking on a result item. 2. A click-through on an item in the result list returned from a query. In the first case (query only) there is data in only the first three columns/fields, namely AnonID, Query, and QueryTime (see above). In the second case (click-through), there is data in all five columns. For click-through events, the query that preceded the click-through is included. Note that if a user clicked on more than one result in the list returned from a single query, there will be TWO lines in the data to represent the two events. Also note that if the user requested the next "page" of results for some query, this appears as a subsequent identical query with a later time stamp.

CAVEAT EMPTOR: SEXUALLY EXPLICIT DATA! Please be aware that these queries are not filtered to remove any content. Pornography is prevalent on the Web, and unfiltered search engine logs contain queries by users who are looking for pornographic material. There are queries in this collection that use SEXUALLY EXPLICIT LANGUAGE. This collection of data is intended for use by mature adults who are not easily offended by the use of pornographic search terms. If you are offended by sexually explicit language, you should not read through this data. Also be aware that in some states it may be illegal to expose a minor to this data. Please understand that the data represents REAL WORLD USERS, un-edited and randomly sampled, and that AOL is not the author of this data.

Basic Collection Statistics:
- Dates: 01 March, 2006 - 31 May, 2006
- Normalized queries: 36,389,567 lines of data
- 21,011,340 instances of new queries (w/ or w/o click-through)
- 7,887,022 requests for "next page" of results
- 19,442,629 user click-through events
- 16,946,938 queries w/o user click-through
- 10,154,742 unique (normalized) queries
- 657,426 unique user IDs</description>
<size>460409936</size>
</item><item>
<title>MovieLens 20M Dataset</title>
<category>Dataset</category>
<infohash>296054417b4d8eeeb4c7b1c842570bf792ee4d14</infohash>
<guid>https://academictorrents.com/details/296054417b4d8eeeb4c7b1c842570bf792ee4d14</guid>
<link>https://academictorrents.com/details/296054417b4d8eeeb4c7b1c842570bf792ee4d14</link>
<description>Stable benchmark dataset. 20 million ratings and 465,000 tag applications applied to 27,000 movies by 138,000 users. Includes tag genome data with 12 million relevance scores across 1,100 tags. Released 4/2015; updated 10/2016 to update links.csv and add tag genome data.

### Summary

This dataset (ml-20m) describes 5-star rating and free-text tagging activity from MovieLens, a movie recommendation service. It contains 20000263 ratings and 465564 tag applications across 27278 movies. These data were created by 138493 users between January 09, 1995 and March 31, 2015. This dataset was generated on October 17, 2016. Users were selected at random for inclusion. All selected users had rated at least 20 movies. No demographic information is included. Each user is represented by an id, and no other information is provided. The data are contained in six files: genome-scores.csv, genome-tags.csv, links.csv, movies.csv, ratings.csv, and tags.csv. More details about the contents and use of all these files follow. This and other GroupLens data sets are publicly available for download at http://grouplens.org/datasets/.

#### Further Information About GroupLens

GroupLens is a research group in the Department of Computer Science and Engineering at the University of Minnesota. Since its inception in 1992, GroupLens's research projects have explored a variety of fields including:

- recommender systems
- online communities
- mobile and ubiquitous technologies
- digital libraries
- local geographic information systems

GroupLens Research operates a movie recommender based on collaborative filtering, MovieLens, which is the source of these data. We encourage you to visit http://movielens.org to try it out! If you have exciting ideas for experimental work to conduct on MovieLens, send us an email at grouplens-info@cs.umn.edu - we are always interested in working with external collaborators.
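The rating files described in this README are plain CSV with a header row (userId,movieId,rating,timestamp), so any CSV parser will do. A minimal Python sketch, using a couple of made-up sample rows rather than the real ml-20m/ratings.csv:

```python
import csv
import io

# Two made-up rows in the ratings.csv format: userId,movieId,rating,timestamp.
# (Illustrative values only; not taken from the actual dataset.)
sample = io.StringIO(
    "userId,movieId,rating,timestamp\n"
    "1,2,3.5,1112486027\n"
    "1,29,3.5,1112484676\n"
)

ratings = []
for row in csv.DictReader(sample):
    ratings.append((int(row["userId"]), int(row["movieId"]),
                    float(row["rating"]), int(row["timestamp"])))

# Ratings are on a 5-star scale in half-star increments (0.5 - 5.0).
assert all(0.5 <= r <= 5.0 and (r * 2).is_integer() for _, _, r, _ in ratings)
print(ratings[0])  # (1, 2, 3.5, 1112486027)
```

To read the real file, replace the StringIO object with `open("ml-20m/ratings.csv", encoding="utf-8")`; the files are UTF-8 encoded as noted below.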
#### Content and Use of Files

#### Verifying the Dataset Contents

We encourage you to verify that the dataset you have on your computer is identical to the ones hosted at grouplens.org. This is an important step if you downloaded the dataset from a location other than grouplens.org, or if you wish to publish research results based on analysis of the MovieLens dataset. We provide an MD5 checksum with the same name as the downloadable .zip file, but with a .md5 file extension. To verify the dataset:

- on Linux: md5sum ml-20m.zip; cat ml-20m.zip.md5
- on OSX: md5 ml-20m.zip; cat ml-20m.zip.md5
- Windows users can download a tool from Microsoft (or elsewhere) that verifies MD5 checksums

Check that the two lines of output contain the same hash value.

#### Formatting and Encoding

The dataset files are written as comma-separated values files with a single header row. Columns that contain commas (,) are escaped using double-quotes ("). These files are encoded as UTF-8. If accented characters in movie titles or tag values (e.g. Misérables, Les (1995)) display incorrectly, make sure that any program reading the data, such as a text editor, terminal, or script, is configured for UTF-8.

#### User Ids

MovieLens users were selected at random for inclusion. Their ids have been anonymized. User ids are consistent between ratings.csv and tags.csv (i.e., the same id refers to the same user across the two files).

#### Movie Ids

Only movies with at least one rating or tag are included in the dataset. These movie ids are consistent with those used on the MovieLens web site (e.g., id 1 corresponds to the URL https://movielens.org/movies/1). Movie ids are consistent between ratings.csv, tags.csv, movies.csv, and links.csv (i.e., the same id refers to the same movie across these four data files).

#### Ratings Data File Structure (ratings.csv)

All ratings are contained in the file ratings.csv.
Each line of this file after the header row represents one rating of one movie by one user, and has the following format: `userId,movieId,rating,timestamp`. The lines within this file are ordered first by userId, then, within user, by movieId. Ratings are made on a 5-star scale, with half-star increments (0.5 stars - 5.0 stars). Timestamps represent seconds since midnight Coordinated Universal Time (UTC) of January 1, 1970. #### Tags Data File Structure (tags.csv) All tags are contained in the file tags.csv. Each line of this file after the header row represents one tag applied to one movie by one user, and has the following format: `userId,movieId,tag,timestamp`. The lines within this file are ordered first by userId, then, within user, by movieId. Tags are user-generated metadata about movies. Each tag is typically a single word or short phrase. The meaning, value, and purpose of a particular tag is determined by each user. Timestamps represent seconds since midnight Coordinated Universal Time (UTC) of January 1, 1970. #### Movies Data File Structure (movies.csv) Movie information is contained in the file movies.csv. Each line of this file after the header row represents one movie, and has the following format: `movieId,title,genres`. Movie titles are entered manually or imported from https://www.themoviedb.org/, and include the year of release in parentheses. Errors and inconsistencies may exist in these titles. Genres are a pipe-separated list, and are selected from the following: Action, Adventure, Animation, Children's, Comedy, Crime, Documentary, Drama, Fantasy, Film-Noir, Horror, Musical, Mystery, Romance, Sci-Fi, Thriller, War, Western, (no genres listed). #### Links Data File Structure (links.csv) Identifiers that can be used to link to other sources of movie data are contained in the file links.csv. Each line of this file after the header row represents one movie, and has the following format: `movieId,imdbId,tmdbId`. movieId is an identifier for movies used by https://movielens.org. 
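As a quick illustration of the per-file layouts described here, the following Python sketch parses ratings.csv-style text into typed records using only the standard library (the sample rows are made up; only the column layout and the Unix-epoch timestamp convention come from this description):

```python
import csv
import io
from datetime import datetime, timezone

def parse_ratings(text):
    """Parse ratings.csv-style CSV text (header row, then
    userId,movieId,rating,timestamp lines) into typed records."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        records.append({
            "userId": int(row["userId"]),
            "movieId": int(row["movieId"]),
            "rating": float(row["rating"]),  # 0.5-5.0 in half-star steps
            # timestamps are seconds since midnight UTC, January 1, 1970
            "time": datetime.fromtimestamp(int(row["timestamp"]),
                                           tz=timezone.utc),
        })
    return records

# Made-up sample rows in the documented layout:
sample = ("userId,movieId,rating,timestamp\n"
          "1,2,3.5,1112486027\n"
          "1,29,3.5,1112484676\n")
records = parse_ratings(sample)
```

The same pattern applies to tags.csv (swap the `rating` column for a free-text `tag` column) and to movies.csv, whose `genres` column can be split on `|`.
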
E.g., the movie Toy Story has the link https://movielens.org/movies/1. imdbId is an identifier for movies used by http://www.imdb.com. E.g., the movie Toy Story has the link http://www.imdb.com/title/tt0114709/. tmdbId is an identifier for movies used by https://www.themoviedb.org. E.g., the movie Toy Story has the link https://www.themoviedb.org/movie/862. Use of the resources listed above is subject to the terms of each provider. #### Tag Genome (genome-scores.csv and genome-tags.csv) This data set includes a current copy of the Tag Genome. The tag genome is a data structure that contains tag relevance scores for movies. The structure is a dense matrix: each movie in the genome has a value for every tag in the genome. As described in this article, the tag genome encodes how strongly movies exhibit particular properties represented by tags (atmospheric, thought-provoking, realistic, etc.). The tag genome was computed using a machine learning algorithm on user-contributed content including tags, ratings, and textual reviews. The genome is split into two files. The file genome-scores.csv contains movie-tag relevance data in the following format: `movieId,tagId,relevance`. The second file, genome-tags.csv, provides the tag descriptions for the tag IDs in the genome file, in the following format: `tagId,tag`. The tagId values are generated when the data set is exported, so they may vary from version to version of the MovieLens data sets. #### Cross-Validation Prior versions of the MovieLens dataset included either pre-computed cross-folds or scripts to perform this computation. We no longer bundle either of these features with the dataset, since most modern toolkits provide this as a built-in feature. If you wish to learn about standard approaches to cross-fold computation in the context of recommender systems evaluation, see LensKit for tools, documentation, and open-source code examples.</description>
<size>198702078</size>
</item><item>
<title>Feathered Dinosaur Tail Encased in Amber</title>
<category>Dataset</category>
<infohash>cc13f33999db0e4b1da7dbd27d9c086f13f1614f</infohash>
<guid>https://academictorrents.com/details/cc13f33999db0e4b1da7dbd27d9c086f13f1614f</guid>
<link>https://academictorrents.com/details/cc13f33999db0e4b1da7dbd27d9c086f13f1614f</link>
<description>This flash video features photographs of an amber sample encasing a feathered dinosaur tail, along with fellow travelers: a Cretaceous cricket-like critter, ants, and plants. The amber was found by scientists at an amber market in Myitkyina, Myanmar (Burma) in 2015. The sample is about 99 million years old.</description>
<size>14510667</size>
</item><item>
<title>CH03_002.bag.tar.gz</title>
<category>Dataset</category>
<infohash>48a5a65231a3ad51939251c94f92716ef337d9c1</infohash>
<guid>https://academictorrents.com/details/48a5a65231a3ad51939251c94f92716ef337d9c1</guid>
<link>https://academictorrents.com/details/48a5a65231a3ad51939251c94f92716ef337d9c1</link>
<description>Continuous recording of a down/back trip on El Camino Real with two GPS fix / IMU sources. Contains LIDAR data from Velodyne HDL-32E (previous runs have used VLP-16) https://github.com/udacity/self-driving-car/tree/master/datasets/CH3</description>
<size>60064047652</size>
</item><item>
<title>Ch2_001: Udacity Self Driving Car</title>
<category>Dataset</category>
<infohash>69343f173317f847e489d7272920c86795029434</infohash>
<guid>https://academictorrents.com/details/69343f173317f847e489d7272920c86795029434</guid>
<link>https://academictorrents.com/details/69343f173317f847e489d7272920c86795029434</link>
<description>https://github.com/udacity/self-driving-car/tree/master/datasets/CH2</description>
<size>477576980</size>
</item><item>
<title>CHX_001: Udacity Self Driving Car</title>
<category>Dataset</category>
<infohash>989ad2fd8bb95c18e07ada27cb100e5b9c14224d</infohash>
<guid>https://academictorrents.com/details/989ad2fd8bb95c18e07ada27cb100e5b9c14224d</guid>
<link>https://academictorrents.com/details/989ad2fd8bb95c18e07ada27cb100e5b9c14224d</link>
<description>https://github.com/udacity/self-driving-car/tree/master/datasets/CHX</description>
<size>1450155797</size>
</item><item>
<title>CH3_001: Udacity Self Driving Car</title>
<category>Dataset</category>
<infohash>769f5a4641d0d26f64d4b550bdfd88b3f4582b11</infohash>
<guid>https://academictorrents.com/details/769f5a4641d0d26f64d4b550bdfd88b3f4582b11</guid>
<link>https://academictorrents.com/details/769f5a4641d0d26f64d4b550bdfd88b3f4582b11</link>
<description>https://github.com/udacity/self-driving-car/tree/master/datasets/CH3</description>
<size>15423386585</size>
</item><item>
<title>Ch2_002: Udacity Self Driving Car</title>
<category>Dataset</category>
<infohash>72dfc74e541cb2c53c027d233007808bc42bf103</infohash>
<guid>https://academictorrents.com/details/72dfc74e541cb2c53c027d233007808bc42bf103</guid>
<link>https://academictorrents.com/details/72dfc74e541cb2c53c027d233007808bc42bf103</link>
<description>https://github.com/udacity/self-driving-car/tree/master/datasets/CH2</description>
<size>4716005956</size>
</item><item>
<title>Ch2_001: Udacity Self Driving Car</title>
<category>Dataset</category>
<infohash>692ee7e0c63fb2212bfe4a62a39ce71ee9b16fb3</infohash>
<guid>https://academictorrents.com/details/692ee7e0c63fb2212bfe4a62a39ce71ee9b16fb3</guid>
<link>https://academictorrents.com/details/692ee7e0c63fb2212bfe4a62a39ce71ee9b16fb3</link>
<description>https://github.com/udacity/self-driving-car/tree/master/datasets/CH2</description>
<size>477579591</size>
</item><item>
<title>Udacity SDC Dataset: 2016-11-07</title>
<category>Dataset</category>
<infohash>d03edc3fd5320a9a0436a4750a95730783a13d5d</infohash>
<guid>https://academictorrents.com/details/d03edc3fd5320a9a0436a4750a95730783a13d5d</guid>
<link>https://academictorrents.com/details/d03edc3fd5320a9a0436a4750a95730783a13d5d</link>
<description>Conditions: overcast, evening/night. Sensors: 3 camera streams, 1 VLP-16 LIDAR packet stream, 1 Xsens IMU positional/GPS fix data stream, and standard vehicle location/state information. This is a continuous recording of El Camino from Mountain View to South San Francisco (and back). The trip is separated into two ROS bag files corresponding to the direction of the trip. New in this dataset is the more accurate GPS fix from the Xsens IMU, available in the `/fix` topic; accelerometer information is in the `/imu/data` topic. This was generated for Challenge Three and for LIDAR point cloud creation (non-challenge related). Those using this dataset for Challenge 2 need to prune out lane changes and stops.</description>
<size>24837842589</size>
</item><item>
<title>Challenges 2 &amp; 3: El Camino Training Data</title>
<category>Dataset</category>
<infohash>e9b47deb3391e33df794e5ec4399d38ef8767c07</infohash>
<guid>https://academictorrents.com/details/e9b47deb3391e33df794e5ec4399d38ef8767c07</guid>
<link>https://academictorrents.com/details/e9b47deb3391e33df794e5ec4399d38ef8767c07</link>
<description>NOTE: LEFT AND CENTER CAMERAS WERE ACCIDENTALLY SWAPPED IN THIS CONFIGURATION DUE TO PERMANENT WINDSHIELD MOUNTING This torrent contains training data in rosbag format for Challenges 2 &amp; 3. It contains the ROS bags from the previously released PNG format official training set, as well as some extra unreleased data from that day.</description>
<size>18709929098</size>
</item><item>
<title>Google Open Images</title>
<category>Dataset</category>
<infohash>9e9194e21ce045deee8d811481b4cd676b20b06b</infohash>
<guid>https://academictorrents.com/details/9e9194e21ce045deee8d811481b4cd676b20b06b</guid>
<link>https://academictorrents.com/details/9e9194e21ce045deee8d811481b4cd676b20b06b</link>
<description>Open Images is a dataset of ~9 million URLs to images that have been annotated with labels spanning over 6000 categories. This is the train and validation part of the dataset. All images are resized to 420 px on the small side. The name of each saved image corresponds to Google's ImageID, which can be used to look up labels in the Open Images dataset. Train: 9011219 total, 8798643 downloaded, 8646180 labeled, 8591564 after post-download cleaning. Validation: 167056 total, 160957 downloaded, 159847 after post-download cleaning.</description>
<size>456558676166</size>
</item><item>
<title>Biology-1B-Spring2015-UCBerkeley</title>
<category>Course</category>
<infohash>6858d4831e4b76752097772ca9850b2ae3174e25</infohash>
<guid>https://academictorrents.com/details/6858d4831e4b76752097772ca9850b2ae3174e25</guid>
<link>https://academictorrents.com/details/6858d4831e4b76752097772ca9850b2ae3174e25</link>
<description/>
<size>14503555013</size>
</item><item>
<title>Biology-1A-Spring2013-UCBerkeley-Jennifer-Doudna</title>
<category>Course</category>
<infohash>256d7b478f5c95275031e2d38a101091c0a9f5d1</infohash>
<guid>https://academictorrents.com/details/256d7b478f5c95275031e2d38a101091c0a9f5d1</guid>
<link>https://academictorrents.com/details/256d7b478f5c95275031e2d38a101091c0a9f5d1</link>
<description>https://i.imgur.com/MltJEbJ.png</description>
<size>4948302447</size>
</item><item>
<title>MIT 7.012 Introduction to Biology - Fall 2004</title>
<category>Course</category>
<infohash>7e1ba0f77f65f5e43e3147691da2ddc36330a361</infohash>
<guid>https://academictorrents.com/details/7e1ba0f77f65f5e43e3147691da2ddc36330a361</guid>
<link>https://academictorrents.com/details/7e1ba0f77f65f5e43e3147691da2ddc36330a361</link>
<description>The MIT Biology Department core courses, 7.012, 7.013, and 7.014, all cover the same core material, which includes the fundamental principles of biochemistry, genetics, molecular biology, and cell biology. Biological function at the molecular level is particularly emphasized and covers the structure and regulation of genes, as well as the structure and synthesis of proteins, how these molecules are integrated into cells, and how these cells are integrated into multicellular systems and organisms. In addition, each version of the subject has its own distinctive material. 7.012 focuses on the exploration of current research in cell biology, immunology, neurobiology, genomics, and molecular medicine. Course Highlights: This course features a complete set of video lectures by Professor Eric Lander, Director of the Broad Institute at MIT and a principal leader of the Human Genome Project, and Professor Robert A. Weinberg, winner of the 1997 National Medal of Science. Education development efforts for these introductory biology courses are one of many activities conducted by the HHMI Education Group at MIT. This group focuses on curriculum development work for creating teaching tools in undergraduate biology courses. 
| LEC # | TOPICS | KEY DATES |
| --- | --- | --- |
| 1 | Introduction | |
| 2 | Biochemistry 1 | |
| 3-5 | Biochemistry 2; Biochemistry 3; Biochemistry 4 | Problem set 1 due in lecture 5 |
| 6 | Genetics 1 | |
| 7 | Genetics 2 | |
| 8 | Genetics 3 | Problem set 2 due |
| 9 | Human Genetics | |
| 10 | Molecular Biology 1 | |
| 11 | Molecular Biology 2 | |
| | Quiz 1, Lectures 1-10 | |
| 12 | Molecular Biology 3 | |
| 13 | Gene Regulation | |
| 14 | Protein Localization | |
| 15-18 | Recombinant DNA 1; Recombinant DNA 2; Recombinant DNA 3; Recombinant DNA 4 | Problem set 3 due in lecture 15; Problem set 4 due in lecture 18 |
| | Quiz 2, Lectures 11-17 | |
| 19 | Cell Cycle/Signaling | |
| 20 | Cancer | |
| 21 | Virology/Tumor Viruses | |
| 22-23 | Immunology 1; Immunology 2 | Problem set 5 due in lecture 23 |
| 24 | AIDS | |
| 25 | Genomics | |
| | Quiz 3, Lectures 18-24 | |
| 26 | Nervous System 1 | |
| 27 | Nervous System 2 | |
| 28 | Nervous System 3 | |
| 29-30 | Stem Cells/Cloning 1; Stem Cells/Cloning 2 | |
| 31 | Molecular Medicine 1 | |
| 32 | Molecular Evolution | |
| 33 | Molecular Medicine 2 | Problem set 6 due |
| 34 | Human Polymorphisms and Cancer Classification | |
| 35 | Future of Biology | |</description>
<size>10328146604</size>
</item><item>
<title>MIT OCW Systems Biology 8.591J Fall 14</title>
<category>Course</category>
<infohash>98961a881e8fd4139f4d1e09f0b7a4badfab7c9d</infohash>
<guid>https://academictorrents.com/details/98961a881e8fd4139f4d1e09f0b7a4badfab7c9d</guid>
<link>https://academictorrents.com/details/98961a881e8fd4139f4d1e09f0b7a4badfab7c9d</link>
<description>### Course Description This course provides an introduction to cellular and population-level systems biology with an emphasis on synthetic biology, modeling of genetic networks, cell-cell interactions, and evolutionary dynamics. Cellular systems include genetic switches and oscillators, network motifs, genetic network evolution, and cellular decision-making. Population-level systems include models of pattern formation, cell-cell communication, and evolutionary systems biology. ### Prerequisites Given the wide range of backgrounds among students in this class, we will try to avoid unnecessary jargon and mathematics. However, it will be very helpful if you are comfortable with the material in Introductory Biology 7.012, Differential Equations 18.03, and Probability 18.05. In addition, each weekly problem set will have a computational problem, so prior experience with a computational package such as MATLAB®, Mathematica®, or Python is expected. The "officially supported" package will be Python (sample code, etc.), but problems can be done in any language. ### Textbooks Required textbooks: Alon, Uri. An Introduction to Systems Biology: Design Principles of Biological Circuits. Chapman &amp; Hall / CRC, 2006. ISBN: 9781584886426. [Preview with Google Books] Nowak, M. A. Evolutionary Dynamics: Exploring the Equations of Life. Belknap Press, 2006. ISBN: 9780674023383. [Preview with Google Books] Supplementary reading: Alberts, Bruce. Essential Cell Biology. Garland Science, 2009. ISBN: 9780815341291. Strogatz, Steven H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Westview Press, 2014. ISBN: 9780813349107. 
[Preview with Google Books]

| LEC # | TOPICS | KEY DATES |
| --- | --- | --- |
| 1 | Introduction to the class and overview of topics. Basic concepts in networks and chemical reactions. | |
| 2 | Input function of a gene, Michaelis-Menten kinetics, and cooperativity | |
| 3 | Autoregulation, feedback and bistability | Problem Set 1 due |
| 4 | Introduction to synthetic biology and stability analysis in the toggle switch | |
| 5 | Oscillatory genetic networks | Problem Set 2 due |
| 6 | Graph properties of transcription networks | |
| 7 | Feed-forward loop network motif | Problem Set 3 due |
| 8 | Introduction to stochastic gene expression | |
| 9 | Causes and consequences of stochastic gene expression | Problem Set 4 due |
| 10 | Stochastic modeling: the master equation, Fokker-Planck equation, and the Gillespie algorithm | |
| 11 | Life at low Reynolds number | Problem Set 5 due |
| 12 | Robustness and bacterial chemotaxis | |
| | No Lecture | Midterm 1 |
| 13 | Robustness in development and pattern formation | Problem Set 6 due |
| 14 | Introduction to microbial evolution experiments, and optimal gene circuit design | |
| 15 | Evolution in finite populations, genetic drift, and the theory of neutral molecular evolution | Problem Set 6 due |
| 16 | Clonal interference and the distribution of beneficial mutations | |
| 17 | Fitness landscapes and sequence spaces | Problem Set 7 due |
| 18 | Evolutionary games | |
| | No Lecture | Midterm 2 |
| 19 | Survival in fluctuating environments | Problem Set 8 due |
| 20 | Parasites, the evolution of virulence and sex | |
| 21 | Interspecies interactions, the Lotka-Volterra model, and predator-prey oscillations | Problem Set 9 due |
| 22 | Ecosystem stability, critical transitions, and the maintenance of biodiversity | |
| 23 | Dynamics of populations in space | Problem Set 10 due |
| 24 | The neutral theory of ecology | |
| | | Final Exam |

Instructor(s): Prof. Jeff Gore. MIT Course Number: 8.591J / 7.81J / 7.32. As Taught In: Fall 2014. Level: Undergraduate / Graduate</description>
<size>4478312051</size>
</item><item>
<title>MIT Foundations of Computational and Systems Biology 7.91J</title>
<category>Course</category>
<infohash>90435db14989b415561ef53a3a12657a13d9d9fa</infohash>
<guid>https://academictorrents.com/details/90435db14989b415561ef53a3a12657a13d9d9fa</guid>
<link>https://academictorrents.com/details/90435db14989b415561ef53a3a12657a13d9d9fa</link>
<description>The MIT Initiative in Computational and Systems Biology (CSBi) is a campus-wide research and education program that links biology, engineering, and computer science in a multidisciplinary approach to the systematic analysis and modeling of complex biological phenomena. This course is one of a series of core subjects offered through the CSB Ph.D. program, for students with an interest in interdisciplinary training and research in the area of computational and systems biology. ### Course Description This course is an introduction to computational biology emphasizing the fundamentals of nucleic acid and protein sequence and structural analysis; it also includes an introduction to the analysis of complex biological systems. Topics covered in the course include principles and methods used for sequence alignment, motif finding, structural modeling, structure prediction and network modeling, as well as currently emerging research areas. This course is designed for advanced undergraduates and graduate students with strong backgrounds in either molecular biology or computer science, but not necessarily both. The scripting language Python—which is widely used for bioinformatics and computational biology—will be used; foundational material covering basic programming skills will be provided by the teaching assistants. Graduate versions of the course involve an additional project component. ### Prerequisites There are different prerequisites for the various versions of the course. See the table for clarification. 
- 7.01 Fundamentals of Biology
- 7.05 General Biochemistry
- 5.07 Biological Chemistry
- 6.00 Introduction to Computer Science and Programming
- 6.01 Introduction to Electrical Engineering and Computer Science
- 18.440 Probability and Random Variables
- 6.041 Probabilistic Systems Analysis and Applied Probability

| SES # | TOPICS | LECTURERS | KEY DATES |
| --- | --- | --- | --- |
| L1 | Course Introduction: History of Computational Biology; Overview of the Course; Course Policies and Mechanics; DNA Sequencing Technologies | CB, DG, EF | |
| Genomic Analysis | | | |
| L2 | Local Alignment (BLAST) and Statistics | CB | |
| R1 | Statistics; Significance Testing; Bonferroni Correction | TA | |
| L3 | Global Alignment of Protein Sequences (NW, SW, PAM, BLOSUM) | CB | Project: Interests Due |
| L4 | Comparative Genomic Analysis of Gene Regulation | CB | |
| R2 | Clustering, Model Selection, and BIC Scores | TA | |
| Genomic Analysis—Next Gen Sequencing | | | |
| L5 | Library Complexity and Short Read Alignment (Mapping) | DG | Problem Set 1 Due |
| R3 | Burrows–Wheeler Transform (BWT) and Alignments. Guest Lecture: Heng Li (Broad Institute) | GL | |
| L6 | Genome Assembly | DG | Project: Teams Due |
| L7 | ChIP-seq Analysis; DNA-protein Interactions | DG | |
| R4 | Simultaneous ChIP-seq Peak Discovery and Motif Sampling | TA | |
| L8 | RNA-sequence Analysis: Expression, Isoforms | DG | |
| Modeling Biological Function | | | |
| L9 | Modeling and Discovery of Sequence Motifs (Gibbs Sampler, Alternatives) | CB | |
| R5 | Gene Expression Program Discovery Using Topic Models | TA | |
| L10 | Markov and Hidden Markov Models of Genomic and Protein Features | CB | |
| L11 | RNA Secondary Structure—Biological Functions and Prediction | CB | Problem Set 2 Due |
| R6 | Probabilistic Grammatical Models of RNA Structure | TA | |
| E1 | Exam 1 | | |
| Proteomics | | | |
| L12 | Introduction to Protein Structure; Structure Comparison and Classification | EF | |
| R7 | Protein Amino Acid Sidechain Packing Using Markov Random Fields | TA | Project: Research Strategy Due |
| L13 | Predicting Protein Structure | EF | |
| L14 | Predicting Protein Interactions | EF | Problem Set 3 Due |
| R8 | Protein / Protein Interaction Prediction Using Threading | TA | |
Regulatory Networks                  |                                                                                                                                           |            |                                | | L15                                  | Gene Regulatory Networks                                                                                                                  | EF         |                                | | L16                                  | Protein Interaction Networks                                                                                                              | EF         |                                | | R9                                   | Regression Trees                                                                                                                          | TA         |                                | | L17                                  | Logic Modeling of Cell Signaling Networks. Guest Lecture: Doug Lauffenburger                                                              | GL         |                                | | L18                                  | Analysis of Chromatin Structure                                                                                                           | DG         | Problem Set 4 Due              | | R10                                  | BayesNets                                                                                                                                 | TA         |                                | | Computational Genetics               |                                                                                                                                           |            |                                | | L19                                  | Discovering Quantitative Trait Loci (QTLs)                                                                                                | DG   
      | Project: Written Report Due    | | R11                                  | Narrow Sense Heritability                                                                                                                 | TA         |                                | | L20                                  | Human Genetics, SNPs, and Genome Wide Associate Studies                                                                                   | DG         |                                | | L21                                  | Synthetic Biology: From Parts to Modules to Therapeutic Systems. Guest Lecture: Ron Weiss                                                 | GL         | Problem Set 5 Due              | | R12                                  | Exam Review                                                                                                                               | TA         |                                | | E2                                   | Exam 2                                                                                                                                    |            |                                | | L22                                  | Causality, Natural Computing, and Engineering Genomes. Guest Lecture: George Church                                                       | GL         |                                | | P1                                   | Presentations                                                                                                                             |            |                                | | P2                                   | Presentations (cont.)                                                                                                                     |            |                                | ### Instructor(s) Prof. Christopher Burge Prof. David Gifford Prof. 
Ernest Fraenkel MIT Course Number 7.91J / 20.490J / 20.390J / 7.36J / 6.802J / 6.874J / HST.506J As Taught In Spring 2014 Level Undergraduate / Graduate</description>
<size>4088800731</size>
</item><item>
<title>Challenge 2 Train and Test Sets</title>
<category>Dataset</category>
<infohash>9b0c6c1044633d076b0f73dc312aa34433a25c56</infohash>
<guid>https://academictorrents.com/details/9b0c6c1044633d076b0f73dc312aa34433a25c56</guid>
<link>https://academictorrents.com/details/9b0c6c1044633d076b0f73dc312aa34433a25c56</link>
<description>Challenge 2 Image Sets. Training data is accompanied by interpolated steering values. Test data only has center image frames.</description>
<size>70189157929</size>
</item><item>
<title>Udacity Self Driving Car Dataset 3-1: El Camino</title>
<category>Dataset</category>
<infohash>c9dae89d2e3897e6aa98c0c8196348c444998a2a</infohash>
<guid>https://academictorrents.com/details/c9dae89d2e3897e6aa98c0c8196348c444998a2a</guid>
<link>https://academictorrents.com/details/c9dae89d2e3897e6aa98c0c8196348c444998a2a</link>
<description>Dataset of two drives from the Udacity office to San Francisco up (and down) El Camino Real, the path of the final drive and where the test sets of Challenges 2 &amp; 3 will take place. Sunny afternoon and evening drives; an attempt was made to stay in the same lane, but obstacles and construction sometimes required lane changes. While this is an official dataset for Challenge 3, it has all the information required for use in Challenge 2. Note that only the center camera feed will be available in the test set. This dataset also includes Velodyne VLP-16 LIDAR packets, so that you can see the format of the LIDAR data we will be publishing, but they are not useful (or allowed) in Challenges 2 &amp; 3.
# To utilize compressed image topics
You need to install a dependency:
$ sudo apt-get install ros-indigo-image-transport*
# To playback data
Copy the udacity_launch package from our GitHub project to your catkin workspace, then compile and source it so that it is reachable. Location of launch files: https://github.com/udacity/self-driving-car/tree/master/datasets/udacity_launch
$ cd udacity-dataset-2-1
$ rosbag play --clock *.bag
$ roslaunch udacity_launch bag_play.launch
# For visualization
$ roslaunch udacity_launch rviz.launch
# Dataset Info
MD5: 13f107727bed0ee5731647b4e114a545
file: udacity-dataset_2016-10-20-13-46-48_0.bag
duration: 1hr 25:26s (5126s)
start: Oct 20 2016 13:46:48.34 (1476996408.34)
end: Oct 20 2016 15:12:15.15 (1477001535.15)
file: udacity-dataset_2016-10-20-15-13-30_0.bag
duration: 1hr 58:44s (7124s)
start: Oct 20 2016 15:13:30.91 (1477001610.91)
end: Oct 20 2016 17:12:15.64 (1477008735.64)</description>
<size>29994697505</size>
</item><item>
<title>PDTI Cloud Help ENTREGA.pdf</title>
<category>Paper</category>
<infohash>b72cfdbd97516f6412a736f3fd2e253d6dcb1f26</infohash>
<guid>https://academictorrents.com/details/b72cfdbd97516f6412a736f3fd2e253d6dcb1f26</guid>
<link>https://academictorrents.com/details/b72cfdbd97516f6412a736f3fd2e253d6dcb1f26</link>
<description>The PDTI (Plano Diretor de Tecnologia da Informação, an IT master plan) is a management tool that encompasses the diagnosis, planning, and management of IT resources and processes in order to enable continuous improvement and boost company performance. It makes it possible to define priorities, drive innovation, reduce costs, optimize resources and, above all, develop strategies so that the organization's objectives are achieved. The PDTI is an important ally for managers in decision-making, helping to mitigate threats and seize opportunities. The advantages of a PDTI are enabling continuous improvement in line with best practices, boosting organizational performance, reducing costs, optimizing the IT area's resources, and directing investment toward strategic projects and actions. This PDTI covers the IT governance, quality policy, knowledge management, sustainability and professional ethics, web environment, and entrepreneurship activities of the company Cloud Help &amp; Projetos.</description>
<size>810698</size>
</item><item>
<title>vgg19_normalized.pkl</title>
<category>Dataset</category>
<infohash>854efbd8e2c085e8e0e5fb2d254ed0e21da6008e</infohash>
<guid>https://academictorrents.com/details/854efbd8e2c085e8e0e5fb2d254ed0e21da6008e</guid>
<link>https://academictorrents.com/details/854efbd8e2c085e8e0e5fb2d254ed0e21da6008e</link>
<description>This is a Python pickle of the parameters for the normalized VGG-19 model implemented in Lasagne.</description>
<size>80126892</size>
</item><item>
<title>MNIST Database (mnist.pkl.gz)</title>
<category>Dataset</category>
<infohash>323a0048d87ca79b68f12a6350a57776b6a3b7fb</infohash>
<guid>https://academictorrents.com/details/323a0048d87ca79b68f12a6350a57776b6a3b7fb</guid>
<link>https://academictorrents.com/details/323a0048d87ca79b68f12a6350a57776b6a3b7fb</link>
<description>The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image. It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting. The original black and white (bilevel) images from NIST were size-normalized to fit in a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels as a result of the anti-aliasing technique used by the normalization algorithm. The images were centered in a 28x28 image by computing the center of mass of the pixels and translating the image so as to position this point at the center of the 28x28 field. With some classification methods (particularly template-based methods, such as SVM and K-nearest neighbors), the error rate improves when the digits are centered by bounding box rather than center of mass. If you do this kind of pre-processing, you should report it in your publications. The MNIST database was constructed from NIST's Special Database 3 and Special Database 1, which contain binary images of handwritten digits. NIST originally designated SD-3 as their training set and SD-1 as their test set. However, SD-3 is much cleaner and easier to recognize than SD-1. The reason for this can be found in the fact that SD-3 was collected among Census Bureau employees, while SD-1 was collected among high-school students. Drawing sensible conclusions from learning experiments requires that the result be independent of the choice of training set and test among the complete set of samples. Therefore it was necessary to build a new database by mixing NIST's datasets. The MNIST training set is composed of 30,000 patterns from SD-3 and 30,000 patterns from SD-1. 
Our test set was composed of 5,000 patterns from SD-3 and 5,000 patterns from SD-1. The 60,000-pattern training set contained examples from approximately 250 writers. We made sure that the sets of writers of the training set and test set were disjoint. SD-1 contains 58,527 digit images written by 500 different writers. In contrast to SD-3, where blocks of data from each writer appeared in sequence, the data in SD-1 is scrambled. Writer identities for SD-1 are available, and we used this information to unscramble the writers. We then split SD-1 in two: characters written by the first 250 writers went into our new training set. The remaining 250 writers were placed in our test set. Thus we had two sets with nearly 30,000 examples each. The new training set was completed with enough examples from SD-3, starting at pattern # 0, to make a full set of 60,000 training patterns. Similarly, the new test set was completed with SD-3 examples starting at pattern # 35,000 to make a full set with 60,000 test patterns. Only a subset of 10,000 test images (5,000 from SD-1 and 5,000 from SD-3) is available on this site. The full 60,000-sample training set is available. Many methods have been tested with this training set and test set. Details about the methods are given in an upcoming paper. Some of those experiments used a version of the database where the input images were deskewed (by computing the principal axis of the shape that is closest to the vertical, and shifting the lines so as to make it vertical). In some other experiments, the training set was augmented with artificially distorted versions of the original training samples. The distortions are random combinations of shifts, scaling, skewing, and compression.</description>
<size>16168813</size>
</item><item>
<title>Udacity Dataset 2-3 Compressed</title>
<category>Dataset</category>
<infohash>1d7fa5116a809b1537bf521fd19897de5d69b7a3</infohash>
<guid>https://academictorrents.com/details/1d7fa5116a809b1537bf521fd19897de5d69b7a3</guid>
<link>https://academictorrents.com/details/1d7fa5116a809b1537bf521fd19897de5d69b7a3</link>
<description>3 hours of daytime driving in highway and city locations. Includes three cameras, CAN, diagnostic, and other data. We’re Building an Open Source Self-Driving Car, and we want your help! At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing and useful subject matter. When we decided to build the Self-Driving Car Nanodegree program, to teach the world to build autonomous vehicles, we instantly knew we had to tackle our own self-driving car too. Together with Google Self-Driving Car founder and Udacity President Sebastian Thrun, we formed our core Self-Driving Car Team. One of the first decisions we made? Open source code, written by hundreds of students from across the globe! https://github.com/udacity/self-driving-car</description>
<size>20961890219</size>
</item><item>
<title>Udacity Self-Driving Car Driving Data 10/3/2016 (dataset-2-2.bag.tar.gz)</title>
<category>Dataset</category>
<infohash>5ac7e6d434aade126696666417e3b9ed5d078f1c</infohash>
<guid>https://academictorrents.com/details/5ac7e6d434aade126696666417e3b9ed5d078f1c</guid>
<link>https://academictorrents.com/details/5ac7e6d434aade126696666417e3b9ed5d078f1c</link>
<description>| Date      | Lighting Conditions | Duration | Compressed Size | Uncompressed | MD5                              |
|-----------|---------------------|----------|-----------------|--------------|----------------------------------|
| 10/3/2016 | Overcast            | 58:53    | 124G            | 183G         | 34362e7d997476ed972d475b93b876f3 |</description>
<size>131579367122</size>
</item><item>
<title>Udacity Self-Driving Car Driving Data 9/29/2016 (dataset.bag.tar.gz)</title>
<category>Dataset</category>
<infohash>6011d0e932970efc999809e9cafab8e791c93bb8</infohash>
<guid>https://academictorrents.com/details/6011d0e932970efc999809e9cafab8e791c93bb8</guid>
<link>https://academictorrents.com/details/6011d0e932970efc999809e9cafab8e791c93bb8</link>
<description>| Date      | Lighting Conditions | Duration | Compressed Size | Uncompressed | MD5                              |
|-----------|---------------------|----------|-----------------|--------------|----------------------------------|
| 9/29/2016 | Sunny               | 12:40    | 25G             | 40G          | 33a10f7835068eeb29b2a3274c216e7d |</description>
<size>25950077103</size>
</item><item>
<title>Udacity Self-Driving Car Dataset 2-2</title>
<category>Dataset</category>
<infohash>bcde779f81adbaae45ef69f9dd07f3e76eab3b27</infohash>
<guid>https://academictorrents.com/details/bcde779f81adbaae45ef69f9dd07f3e76eab3b27</guid>
<link>https://academictorrents.com/details/bcde779f81adbaae45ef69f9dd07f3e76eab3b27</link>
<description>We’re Building an Open Source Self-Driving Car, and we want your help! At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing and useful subject matter. When we decided to build the Self-Driving Car Nanodegree program, to teach the world to build autonomous vehicles, we instantly knew we had to tackle our own self-driving car too. Together with Google Self-Driving Car founder and Udacity President Sebastian Thrun, we formed our core Self-Driving Car Team. One of the first decisions we made? Open source code, written by hundreds of students from across the globe! https://github.com/udacity/self-driving-car</description>
<size>7043569975</size>
</item><item>
<title>Udacity Self-Driving Car Dataset 2-1</title>
<category>Dataset</category>
<infohash>f2666220bb74417dfc43815b710a1565cd1a6b76</infohash>
<guid>https://academictorrents.com/details/f2666220bb74417dfc43815b710a1565cd1a6b76</guid>
<link>https://academictorrents.com/details/f2666220bb74417dfc43815b710a1565cd1a6b76</link>
<description>We’re Building an Open Source Self-Driving Car, and we want your help! At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing and useful subject matter. When we decided to build the Self-Driving Car Nanodegree program, to teach the world to build autonomous vehicles, we instantly knew we had to tackle our own self-driving car too. Together with Google Self-Driving Car founder and Udacity President Sebastian Thrun, we formed our core Self-Driving Car Team. One of the first decisions we made? Open source code, written by hundreds of students from across the globe! https://github.com/udacity/self-driving-car</description>
<size>1637815137</size>
</item><item>
<title>A collection of IRONMAN, IRONMAN 70.3 and Ultra-triathlon race results</title>
<category>Dataset</category>
<infohash>2269d7d1c77375aea732eea0905e370d4741575f</infohash>
<guid>https://academictorrents.com/details/2269d7d1c77375aea732eea0905e370d4741575f</guid>
<link>https://academictorrents.com/details/2269d7d1c77375aea732eea0905e370d4741575f</link>
<description>This Technical Report presents a collection of IRONMAN, IRONMAN 70.3, and Ultra-triathlon race results scraped from the web. The collection is intended for data-mining purposes.</description>
<size>54957988</size>
</item><item>
<title>Coursera - Economics of Money and Banking Part Two</title>
<category>Course</category>
<infohash>62fe3cfdec4d19d1d3bf98184b12538a46b783f5</infohash>
<guid>https://academictorrents.com/details/62fe3cfdec4d19d1d3bf98184b12538a46b783f5</guid>
<link>https://academictorrents.com/details/62fe3cfdec4d19d1d3bf98184b12538a46b783f5</link>
<description>The last three or four decades have seen a remarkable evolution in the institutions that comprise the modern monetary system. The financial crisis of 2007-2009 is a wake-up call that we need a similar evolution in the analytical apparatus and theories that we use to understand that system. Produced and sponsored by the Institute for New Economic Thinking, this course is an attempt to begin the process of new economic thinking by reviving and updating some forgotten traditions in monetary thought that have become newly relevant. Three features of the new system are central. Most important, the intertwining of previously separate capital markets and money markets has produced a system with new dynamics as well as new vulnerabilities. The financial crisis revealed those vulnerabilities for all to see. The result was two years of desperate innovation by central banking authorities as they tried first this, and then that, in an effort to stem the collapse. Second, the global character of the crisis has revealed the global character of the system, which is something new in postwar history but not at all new from a longer time perspective. Central bank cooperation was key to stemming the collapse, and the details of that cooperation hint at the outlines of an emerging new international monetary order. Third, absolutely central to the crisis was the operation of key derivative contracts, most importantly credit default swaps and foreign exchange swaps. Modern money cannot be understood separately from modern finance, nor can modern monetary theory be constructed separately from modern financial theory. That's the reason this course places dealers, in both capital markets and money markets, at the very center of the picture, as profit-seeking suppliers of market liquidity to the new system of market-based credit.</description>
<size>3439646754</size>
</item><item>
<title>Coursera - Economics of Money and Banking Part One</title>
<category>Course</category>
<infohash>970f4ee32d1a49168466a517b3dcd0442b043abc</infohash>
<guid>https://academictorrents.com/details/970f4ee32d1a49168466a517b3dcd0442b043abc</guid>
<link>https://academictorrents.com/details/970f4ee32d1a49168466a517b3dcd0442b043abc</link>
<description>The last three or four decades have seen a remarkable evolution in the institutions that comprise the modern monetary system. The financial crisis of 2007-2009 is a wake-up call that we need a similar evolution in the analytical apparatus and theories that we use to understand that system. Produced and sponsored by the Institute for New Economic Thinking, this course is an attempt to begin the process of new economic thinking by reviving and updating some forgotten traditions in monetary thought that have become newly relevant. Three features of the new system are central. Most important, the intertwining of previously separate capital markets and money markets has produced a system with new dynamics as well as new vulnerabilities. The financial crisis revealed those vulnerabilities for all to see. The result was two years of desperate innovation by central banking authorities as they tried first this, and then that, in an effort to stem the collapse. Second, the global character of the crisis has revealed the global character of the system, which is something new in postwar history but not at all new from a longer time perspective. Central bank cooperation was key to stemming the collapse, and the details of that cooperation hint at the outlines of an emerging new international monetary order. Third, absolutely central to the crisis was the operation of key derivative contracts, most importantly credit default swaps and foreign exchange swaps. Modern money cannot be understood separately from modern finance, nor can modern monetary theory be constructed separately from modern financial theory. That's the reason this course places dealers, in both capital markets and money markets, at the very center of the picture, as profit-seeking suppliers of market liquidity to the new system of market-based credit.</description>
<size>2343104530</size>
</item><item>
<title>[Coursera] Analytic Combinatorics</title>
<category>Course</category>
<infohash>af6745de427de294a32cfee3ceae5b8aba909500</infohash>
<guid>https://academictorrents.com/details/af6745de427de294a32cfee3ceae5b8aba909500</guid>
<link>https://academictorrents.com/details/af6745de427de294a32cfee3ceae5b8aba909500</link>
<description>Analytic Combinatorics teaches a calculus that enables precise quantitative predictions of large combinatorial structures. This course introduces the symbolic method to derive functional relations among ordinary, exponential, and multivariate generating functions, and methods in complex analysis for deriving accurate asymptotics from the GF equations. Analytic Combinatorics is based on formal methods for deriving functional relationships on generating functions and asymptotic analysis treating those functions as functions in the complex plane. This course covers the symbolic method for defining generating functions immediately from combinatorial constructions, then develops methods for directly deriving asymptotic results from those generating functions, using complex asymptotics, singularity analysis, saddle-point asymptotics, and limit laws. The course teaches the precept "if you can specify it, you can analyze it".</description>
<size>1592350714</size>
</item><item>
<title>[Coursera] Clinical Problem Solving</title>
<category>Course</category>
<infohash>dae02888e2fb6484a7b471cb7977eb859aba4831</infohash>
<guid>https://academictorrents.com/details/dae02888e2fb6484a7b471cb7977eb859aba4831</guid>
<link>https://academictorrents.com/details/dae02888e2fb6484a7b471cb7977eb859aba4831</link>
<description>Participants will learn how to move efficiently from patient signs and symptoms to a rational and prioritized set of diagnostic possibilities and will learn how to study and read to facilitate this process. Clinical problem solving or diagnostic reasoning is the skill that physicians use to understand a patient’s complaints and then to identify a short, prioritized list of possible diagnoses that could account for those complaints. This differential diagnosis then drives the choice of diagnostic tests and possible treatments. Despite striking advances in information technology, clinical problem solving has not yet been effectively replicated by computers, making it essential that clinicians work to develop expertise in this very important skill set. This course will examine the ways physicians think about clinical problem solving and will help participants develop competence in the building blocks of clinical problem solving. The professor will use cases to illustrate different reasoning strategies and will discuss how both correct and incorrect diagnoses result from these strategies. Participants will use sample clinical cases to practice what they have learned through the lectures. Finally, the professor will discuss strategies to help students and young physicians read textbooks and articles in a way that enhances their ability to use information in the clinical environment.</description>
<size>1203451655</size>
</item><item>
<title>[Coursera] Statistics: Making Sense of Data</title>
<category>Course</category>
<infohash>a0cbaf3e03e0893085b6fbdc97cb6220896dddf2</infohash>
<guid>https://academictorrents.com/details/a0cbaf3e03e0893085b6fbdc97cb6220896dddf2</guid>
<link>https://academictorrents.com/details/a0cbaf3e03e0893085b6fbdc97cb6220896dddf2</link>
<description>This course is an introduction to the key ideas and principles of the collection, display, and analysis of data to guide you in making valid and appropriate conclusions about the world. We live in a world where data are increasingly available, in ever larger quantities, and are increasingly expected to form the basis for decisions by governments, businesses, and other organizations, as well as by individuals in their daily lives. To cope effectively, every informed citizen must be statistically literate. This course will provide an intuitive introduction to applied statistical reasoning, introducing fundamental statistical skills and acquainting students with the full process of inquiry and evaluation used in investigations in a wide range of fields.</description>
<size>840871302</size>
</item><item>
<title>[Coursera] Probabilistic Graphical Models</title>
<category>Course</category>
<infohash>e74f08f0fc699e84a9eb046309727d07d80171c5</infohash>
<guid>https://academictorrents.com/details/e74f08f0fc699e84a9eb046309727d07d80171c5</guid>
<link>https://academictorrents.com/details/e74f08f0fc699e84a9eb046309727d07d80171c5</link>
<description>In this class, you will learn the basics of PGM representations and how to construct them, using both human knowledge and machine learning techniques. Uncertainty is unavoidable in real-world applications: we can almost never predict with certainty what will happen in the future, and even in the present and the past, many important aspects of the world are not observed with certainty. Probability theory gives us the basic foundation to model our beliefs about the different possible states of the world, and to update these beliefs as new evidence is obtained. These beliefs can be combined with individual preferences to help guide our actions, and even in selecting which observations to make. While probability theory has existed since the 17th century, our ability to use it effectively on large problems involving many inter-related variables is fairly recent, and is due largely to the development of a framework known as Probabilistic Graphical Models (PGMs). This framework, which spans methods such as Bayesian networks and Markov random fields, uses ideas from discrete data structures in computer science to efficiently encode and manipulate probability distributions over high-dimensional spaces, often involving hundreds or even many thousands of variables. These methods have been used in an enormous range of application domains, which include: web search, medical and fault diagnosis, image understanding, reconstruction of biological networks, speech recognition, natural language processing, decoding of messages sent over a noisy communication channel, robot navigation, and many more. The PGM framework provides an essential tool for anyone who wants to learn how to reason coherently from limited and noisy observations.</description>
<size>1503615787</size>
</item><item>
<title>[Coursera] Exploring Quantum Physics</title>
<category>Course</category>
<infohash>f24122f15283757aa8a9bf9cb638db266273442d</infohash>
<guid>https://academictorrents.com/details/f24122f15283757aa8a9bf9cb638db266273442d</guid>
<link>https://academictorrents.com/details/f24122f15283757aa8a9bf9cb638db266273442d</link>
<description>An introduction to quantum physics with emphasis on topics at the frontiers of research, and on developing understanding through exercise. Quantum physics is the foundation for much of modern technology, provides the framework for understanding light and matter from the subatomic to macroscopic domains, and makes possible the most precise measurements ever made. More than just a theory, it offers a way of looking at the world that grows richer with experience and practice. Our course will provide some of that practice and teach you "tricks of the trade" (not found in textbooks) that will enable you to solve quantum-mechanical problems yourself and understand the subject at a deeper level. The basic principles of quantum physics are actually quite simple, but they lead to astonishing outcomes. Two examples that we will look at from various perspectives are the prediction of the laser by Albert Einstein in 1917 and the prediction of antimatter by Paul Dirac in 1928. Both of these predictions came from very simple arguments in quantum theory, and led to results that transformed science and society. Another familiar phenomenon, magnetism, had been known since antiquity, but only with the advent of quantum physics was it understood how magnets worked, to a degree that made possible the discovery in the 1980s of ultrastrong rare-earth magnets. However, lasers, antimatter and magnets are areas of vibrant research, and they are all encountered in the new field of ultracold atomic physics that will provide much of the material of “Exploring Quantum Physics”. Richard Feynman once said, “I think I can safely say that nobody understands quantum mechanics.” We say, that’s no reason not to try! What Feynman was referring to are some of the “spooky” phenomena like quantum entanglement, which are incomprehensible from the standpoint of classical physics. 
Even though they have been thoroughly tested by experiment, and are even being exploited for applications such as cryptography and logic processing, they still seem so counterintuitive that they give rise to extraordinary ideas such as the many-worlds theory. Quantum physics combines a spectacular record of discovery and predictive success with foundational perplexities so severe that even Albert Einstein came to believe that it was wrong. This is what makes it such an exciting area of science!</description>
<size>1644349572</size>
</item><item>
<title>[Coursera] Bitcoin and Cryptocurrency Technologies</title>
<category>Course</category>
<infohash>409f4cdb063940f2210acbbecca09ca6ad8b3032</infohash>
<guid>https://academictorrents.com/details/409f4cdb063940f2210acbbecca09ca6ad8b3032</guid>
<link>https://academictorrents.com/details/409f4cdb063940f2210acbbecca09ca6ad8b3032</link>
<description>There’s a lot of excitement about Bitcoin, but also a lot of confusion about what Bitcoin is and how it works. We’re offering this course focusing on the computer science behind Bitcoin to help cut through the hype and get to the core of what makes Bitcoin unique. To really understand what is special about Bitcoin, we need to understand how it works at a technical level. We’ll address the important questions about Bitcoin, such as: How does Bitcoin work? What makes Bitcoin different? How secure are your Bitcoins? How anonymous are Bitcoin users? What determines the price of Bitcoins? Can cryptocurrencies be regulated? What might the future hold? After this course, you’ll know everything you need to be able to separate fact from fiction when reading claims about Bitcoin and other cryptocurrencies. You’ll have the conceptual foundations you need to engineer secure software that interacts with the Bitcoin network. And you’ll be able to integrate ideas from Bitcoin in your own projects.</description>
<size>5125464116</size>
</item><item>
<title>[Coursera] Heterogeneous Parallel Programming</title>
<category>Course</category>
<infohash>8903d0871c652b96c7b29db738cea76902d65888</infohash>
<guid>https://academictorrents.com/details/8903d0871c652b96c7b29db738cea76902d65888</guid>
<link>https://academictorrents.com/details/8903d0871c652b96c7b29db738cea76902d65888</link>
<description>This course introduces concepts, languages, techniques, and patterns for programming heterogeneous, massively parallel processors. Its contents and structure have been significantly revised based on the experience gained from its initial offering in 2012. It covers heterogeneous computing architectures, data-parallel programming models, techniques for memory bandwidth management, and parallel algorithm patterns. All computing systems, from mobile to supercomputers, are becoming heterogeneous, massively parallel computers for higher power efficiency and computation throughput. While the computing community is racing to build tools and libraries to ease the use of these systems, effective and confident use of these systems will always require knowledge about low-level programming in these systems. This course is designed for students to learn the essence of low-level programming interfaces and how to use these interfaces to achieve application goals. CUDA C, with its good balance between user control and verbosity, will serve as the teaching vehicle for the first half of the course. Students will then extend their learning into closely related programming interfaces such as OpenCL, OpenACC, and C++ AMP. The course is unique in that it is application oriented and only introduces the necessary underlying computer science and computer engineering knowledge for understanding. It covers the concept of data parallel execution models, memory models for managing locality, tiling techniques for reducing bandwidth consumption, parallel algorithm patterns, overlapping computation with communication, and a variety of heterogeneous parallel programming interfaces. The concepts learned in this course form a strong foundation for learning other types of parallel programming systems.</description>
<size>4906095314</size>
</item><item>
<title>[Coursera] Designing and Executing Information Security Strategies</title>
<category>Course</category>
<infohash>55284a002672598923af36bc55f3205b42c93b00</infohash>
<guid>https://academictorrents.com/details/55284a002672598923af36bc55f3205b42c93b00</guid>
<link>https://academictorrents.com/details/55284a002672598923af36bc55f3205b42c93b00</link>
<description>This course provides you with opportunities to integrate and apply your information security knowledge. Following the case-study approach, you will be introduced to current, real-world cases developed and presented by the practitioner community. You will design and execute information assurance strategies to solve these cases.</description>
<size>1007286635</size>
</item><item>
<title>[Coursera] Coding the Matrix: Linear Algebra through Computer Science Applications</title>
<category>Course</category>
<infohash>54cd86f3038dfd446b037891406ba4e0b1200d5a</infohash>
<guid>https://academictorrents.com/details/54cd86f3038dfd446b037891406ba4e0b1200d5a</guid>
<link>https://academictorrents.com/details/54cd86f3038dfd446b037891406ba4e0b1200d5a</link>
<description>When you take a digital photo with your phone or transform the image in Photoshop, when you play a video game or watch a movie with digital effects, when you do a web search or make a phone call, you are using technologies that build upon linear algebra. Linear algebra provides concepts that are crucial to many areas of computer science, including graphics, image processing, cryptography, machine learning, computer vision, optimization, graph algorithms, quantum computation, computational biology, information retrieval and web search. Linear algebra in turn is built on two basic elements, the matrix and the vector. In this class, you will learn the concepts and methods of linear algebra, and how to use them to think about problems arising in computer science. You will write small programs in the programming language Python to implement basic matrix and vector functionality and algorithms, and use these to process real-world data to achieve such tasks as: two-dimensional graphics transformations, face morphing, face detection, image transformations such as blurring and edge detection, image perspective removal, audio and image compression, searching within an image or an audio clip, classification of tumors as malignant or benign, integer factorization, error-correcting codes, secret-sharing, network layout, document classification, and computing PageRank (Google's ranking method).</description>
<size>2978388394</size>
</item><item>
<title>[Coursera] The Hardware/Software Interface</title>
<category>Course</category>
<infohash>f1384286c8581bffba11e378fdb37608e649d82a</infohash>
<guid>https://academictorrents.com/details/f1384286c8581bffba11e378fdb37608e649d82a</guid>
<link>https://academictorrents.com/details/f1384286c8581bffba11e378fdb37608e649d82a</link>
<description>Examines key computational abstraction levels below modern high-level languages. From Java/C to assembly programming, to basic processor and system organization. This course examines key computational abstraction levels below modern high-level languages: number representation, assembly language, introduction to C, memory management, the operating-system process model, high-level machine architecture including the memory hierarchy, and how high-level languages are implemented. We will develop students’ sense of “what really happens” when software runs — and that this question can be answered at several levels of abstraction, including the hardware architecture level, the assembly level, the C programming level and the Java programming level. The core around which the course is built is C, assembly, and low-level data representation, but this is connected to higher levels (roughly how basic Java could be implemented), lower levels (the general structure of a processor and the memory hierarchy), and the role of the operating system (but not how the operating system is implemented).</description>
<size>1311351354</size>
</item><item>
<title>[Coursera] Computer Architecture </title>
<category>Course</category>
<infohash>53bae6d22f3b6e692673f9335e0a0198c1618426</infohash>
<guid>https://academictorrents.com/details/53bae6d22f3b6e692673f9335e0a0198c1618426</guid>
<link>https://academictorrents.com/details/53bae6d22f3b6e692673f9335e0a0198c1618426</link>
<description>About this course: In this course, you will learn to design the computer architecture of complex modern microprocessors. ### Introduction, Instruction Set Architecture, and Microcode This lecture will give you a broad overview of the course, as well as the description of architecture, micro-architecture and instruction set architectures. ### Pipelining Review This lecture covers the basic concepts of pipelining and two different types of hazards. ### Cache Review This lecture covers control hazards and the motivation for caches. ### Superscalar 1 This lecture covers cache characteristics and basic superscalar architecture. ### Superscalar 2 &amp; Exceptions This lecture covers the common issues for superscalar architecture. ### Superscalar 3 This lecture covers different kinds of architectures for out-of-order processors. ### Superscalar 4 This lecture covers the common methods used to improve the performance of out-of-order processors including register renaming and memory disambiguation. ### VLIW 1 This lecture covers the basic concept of very long instruction word (VLIW) processors. ### VLIW 2 This lecture covers the common methods used to improve VLIW performance. ### Branch Prediction This lecture covers the motivation and implementation of branch predictors. ### Advanced Caches 1 This lecture covers the advanced mechanisms used to improve cache performance. ### Advanced Caches 2 This lecture covers more advanced mechanisms used to improve cache performance. ### Memory Protection This lecture covers memory management and protection. ### Vector Processors and GPUs This lecture covers the vector processor and optimizations for vector processors. ### Multithreading This lecture covers different types of multithreading. ### Parallel Programming 1 This lecture covers the concepts of parallelism, consistency models, and basic parallel programming techniques. 
### Parallel Programming 2 This lecture covers the solutions for the consistency problem in parallel programming. ### Small Multiprocessors This lecture covers the implementation of small multiprocessors. ### Multiprocessor Interconnect 1 This lecture covers the design of interconnects for a multiprocessor. ### Multiprocessor Interconnect 2 This lecture covers the design of interconnects for a multiprocessor and network topology. ### Large Multiprocessors (Directory Protocols) This lecture covers the motivation and implementation of the directory protocols used for coherence on large multiprocessors.</description>
<size>5558550001</size>
</item><item>
<title>[Coursera] Natural Language Processing</title>
<category>Course</category>
<infohash>f99e7184fca947ee8f77901679e171fcadbf82e7</infohash>
<guid>https://academictorrents.com/details/f99e7184fca947ee8f77901679e171fcadbf82e7</guid>
<link>https://academictorrents.com/details/f99e7184fca947ee8f77901679e171fcadbf82e7</link>
<description>Course Description: COMS W4705 is a graduate introduction to natural language processing, the study of human language from a computational perspective. We will cover syntactic, semantic and discourse processing models. The emphasis will be on machine learning or corpus-based methods and algorithms. We will describe the use of these methods and models in applications including syntactic parsing, information extraction, statistical machine translation, dialogue systems, and summarization. Syllabus: Here is a tentative syllabus for the class:
- Introduction
- Estimation techniques, and language modeling
- Tagging, hidden Markov models
- Statistical parsing
- Machine translation
- Log-linear models
- Conditional random fields, global linear models
- Unsupervised and semi-supervised learning in NLP</description>
<size>1204375412</size>
</item><item>
<title>[Coursera] Learn to Program: Crafting Quality Code</title>
<category>Course</category>
<infohash>5d940b05a2097b5fcf2392916e6c4901743fb219</infohash>
<guid>https://academictorrents.com/details/5d940b05a2097b5fcf2392916e6c4901743fb219</guid>
<link>https://academictorrents.com/details/5d940b05a2097b5fcf2392916e6c4901743fb219</link>
<description>About this course: Not all programs are created equal. In this course, we'll focus on writing quality code that runs correctly and efficiently. We'll design, code and validate our programs and learn how to compare programs that are addressing the same task.</description>
<size>360725589</size>
</item><item>
<title>[Coursera] Introduction to Mathematical Thinking</title>
<category>Course</category>
<infohash>2b5e5cc8c7414bc3b0f6974190065bc8c2f629dc</infohash>
<guid>https://academictorrents.com/details/2b5e5cc8c7414bc3b0f6974190065bc8c2f629dc</guid>
<link>https://academictorrents.com/details/2b5e5cc8c7414bc3b0f6974190065bc8c2f629dc</link>
<description>About this course: Learn how to think the way mathematicians do - a powerful cognitive process developed over thousands of years. The goal of the course is to help you develop a valuable mental ability – a powerful way of thinking that our ancestors have developed over three thousand years. Mathematical thinking is not the same as doing mathematics – at least not as mathematics is typically presented in our school system. School math typically focuses on learning procedures to solve highly stereotyped problems. Professional mathematicians think a certain way to solve real problems, problems that can arise from the everyday world, or from science, or from within mathematics itself. The key to success in school math is to learn to think inside-the-box. In contrast, a key feature of mathematical thinking is thinking outside-the-box – a valuable ability in today’s world. This course helps to develop that crucial way of thinking. The course is offered in two versions. The eight-week-long Basic Course is designed for people who want to develop or improve mathematics-based, analytic thinking for professional or general life purposes. The ten-week-long Extended Course is aimed primarily at first-year students at college or university who are thinking of majoring in mathematics or a mathematically-dependent subject, or high school seniors who have such a college career in mind. The final two weeks are more intensive and require more mathematical background than the Basic Course. There is no need to make a formal election between the two. Simply skip or drop out of the final two weeks if you decide you want to complete only the Basic Course.</description>
<size>1902737188</size>
</item><item>
<title>[Coursera] Learn to Program: The Fundamentals</title>
<category>Course</category>
<infohash>42a92c2f8f4607601ed42e473e45255f5122f446</infohash>
<guid>https://academictorrents.com/details/42a92c2f8f4607601ed42e473e45255f5122f446</guid>
<link>https://academictorrents.com/details/42a92c2f8f4607601ed42e473e45255f5122f446</link>
<description>About this course: Behind every mouse click and touch-screen tap, there is a computer program that makes things happen. This course introduces the fundamental building blocks of programming and teaches you how to write fun and useful programs using the Python language. Who is this class for: This course is primarily aimed at first-year university students and high school students who want to learn how to program.</description>
<size>1636097345</size>
</item><item>
<title>[Coursera] Compilers</title>
<category>Course</category>
<infohash>e31e54905c7b2669c81fe164de2859be4697013a</infohash>
<guid>https://academictorrents.com/details/e31e54905c7b2669c81fe164de2859be4697013a</guid>
<link>https://academictorrents.com/details/e31e54905c7b2669c81fe164de2859be4697013a</link>
<description>This course will discuss the major ideas used today in the implementation of programming language compilers. You will learn how a program written in a high-level language designed for humans is systematically translated into a program written in low-level assembly more suited to machines!</description>
<size>1234738985</size>
</item><item>
<title>[Coursera] Algorithms Part II </title>
<category>Course</category>
<infohash>7afeafb540f4ff63690f1a6517748341f6809516</infohash>
<guid>https://academictorrents.com/details/7afeafb540f4ff63690f1a6517748341f6809516</guid>
<link>https://academictorrents.com/details/7afeafb540f4ff63690f1a6517748341f6809516</link>
<description>About this course: This course covers the essential information that every serious programmer needs to know about algorithms and data structures, with emphasis on applications and scientific performance analysis of Java implementations. Part I covers elementary data structures, sorting, and searching algorithms. Part II focuses on graph- and string-processing algorithms.</description>
<size>1925417068</size>
</item><item>
<title>[Coursera] Algorithms Part I</title>
<category>Course</category>
<infohash>a2934d859a14c07a80092ab03552310838f66590</infohash>
<guid>https://academictorrents.com/details/a2934d859a14c07a80092ab03552310838f66590</guid>
<link>https://academictorrents.com/details/a2934d859a14c07a80092ab03552310838f66590</link>
<description>About this course: This course covers the essential information that every serious programmer needs to know about algorithms and data structures, with emphasis on applications and scientific performance analysis of Java implementations. Part I covers elementary data structures, sorting, and searching algorithms. Part II focuses on graph- and string-processing algorithms. ## Union−Find We illustrate our basic approach to developing and analyzing algorithms by considering the dynamic connectivity problem. We introduce the union−find data type and consider several implementations (quick find, quick union, weighted quick union, and weighted quick union with path compression). Finally, we apply the union−find data type to the percolation problem from physical chemistry. ## Analysis of Algorithms The basis of our approach for analyzing the performance of algorithms is the scientific method. We begin by performing computational experiments to measure the running times of our programs. We use these measurements to develop hypotheses about performance. Next, we create mathematical models to explain their behavior. Finally, we consider analyzing the memory usage of our Java programs. ## Stacks and Queues We consider two fundamental data types for storing collections of objects: the stack and the queue. We implement each using either a singly-linked list or a resizing array. We introduce two advanced Java features—generics and iterators—that simplify client code. Finally, we consider various applications of stacks and queues ranging from parsing arithmetic expressions to simulating queueing systems. ## Elementary Sorts We introduce the sorting problem and Java's Comparable interface. We study two elementary sorting methods (selection sort and insertion sort) and a variation of one of them (shellsort). We also consider two algorithms for uniformly shuffling an array. 
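One standard algorithm for uniformly shuffling an array is the Knuth (Fisher-Yates) shuffle. As an illustrative sketch only (in Python, not the course's own Java code), it can be written as:

```python
import random

def knuth_shuffle(a):
    """Uniformly shuffle the list a in place and return it."""
    n = len(a)
    # Walk backwards; swap position i with a uniformly random
    # position j in 0..i (inclusive).
    for i in range(n - 1, 0, -1):
        j = random.randrange(i + 1)
        a[i], a[j] = a[j], a[i]
    return a
```

Provided the random source is uniform, every permutation of the array is equally likely, and the shuffle runs in linear time.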
We conclude with an application of sorting to computing the convex hull via the Graham scan algorithm. ## Mergesort We study the mergesort algorithm and show that it guarantees to sort any array of n items with at most n lg n compares. We also consider a nonrecursive, bottom-up version. We prove that any compare-based sorting algorithm must make at least n lg n compares in the worst case. We discuss using different orderings for the objects that we are sorting and the related concept of stability. ## Quicksort We introduce and implement the randomized quicksort algorithm and analyze its performance. We also consider randomized quickselect, a quicksort variant which finds the kth smallest item in linear time. Finally, we consider 3-way quicksort, a variant of quicksort that works especially well in the presence of duplicate keys. ## Priority Queues We introduce the priority queue data type and an efficient implementation using the binary heap data structure. This implementation also leads to an efficient sorting algorithm known as heapsort. We conclude with an application of priority queues in which we simulate the motion of n particles subject to the laws of elastic collision. ## Elementary Symbol Tables We define an API for symbol tables (also known as associative arrays) and describe two elementary implementations using a sorted array (binary search) and an unordered list (sequential search). When the keys are Comparable, we define an extended API that includes the additional methods min, max, floor, ceiling, rank, and select. To develop an efficient implementation of this API, we study the binary search tree data structure and analyze its performance. ## Balanced Search Trees In this lecture, our goal is to develop a symbol table with guaranteed logarithmic performance for search and insert (and many other operations). We begin with 2−3 trees, which are easy to analyze but hard to implement. 
Next, we consider red−black binary search trees, which we view as a novel way to implement 2−3 trees as binary search trees. Finally, we introduce B-trees, a generalization of 2−3 trees that are widely used to implement file systems. ## Geometric Applications of BSTs We start with 1d and 2d range searching, where the goal is to find all points in a given 1d or 2d interval. To accomplish this, we consider kd-trees, a natural generalization of BSTs when the keys are points in the plane (or higher dimensions). We also consider intersection problems, where the goal is to find all intersections among a set of line segments or rectangles. ## Hash Tables We begin by describing the desirable properties of a hash function and how to implement them in Java, including a fundamental tenet known as the uniform hashing assumption that underlies the potential success of a hashing application. Then, we consider two strategies for implementing hash tables—separate chaining and linear probing. Both strategies yield constant-time performance for search and insert under the uniform hashing assumption. ## Symbol Table Applications We consider various applications of symbol tables including sets, dictionary clients, indexing clients, and sparse vectors.</description>
<size>1622615266</size>
</item><item>
<title>Modified PubMed Dataset used by WSU-IR team at TREC 2015 Clinical Decision Support Track</title>
<category>Dataset</category>
<infohash>371a9244d2e9344a196a449f898e0a4385b6b43a</infohash>
<guid>https://academictorrents.com/details/371a9244d2e9344a196a449f898e0a4385b6b43a</guid>
<link>https://academictorrents.com/details/371a9244d2e9344a196a449f898e0a4385b6b43a</link>
<description>The paper corresponding to this dataset describes the participation of the WSU-IR group in the TREC 2015 Clinical Decision Support (CDS) track. We present a Markov Random Fields-based retrieval model and an optimization method for jointly weighting statistical and semantic unigram, bigram and multi-phrase concepts from the query and PRF documents, as well as three specific instantiations of this model that we used to obtain the runs submitted for each task in this track. These instantiations consider different types of concepts and use different parts of topics as queries.</description>
<size>18533878946</size>
</item><item>
<title>UCSD Pedestrian Database</title>
<category>Dataset</category>
<infohash>fed43599b7e8e0a0fbe1e22062cdb54d36cf951d</infohash>
<guid>https://academictorrents.com/details/fed43599b7e8e0a0fbe1e22062cdb54d36cf951d</guid>
<link>https://academictorrents.com/details/fed43599b7e8e0a0fbe1e22062cdb54d36cf951d</link>
<description>This is the UCSD pedestrian database used in “Modeling, Clustering, and Segmenting Video with Mixtures of Dynamic Textures” ## Database Format The database contains video of pedestrians on UCSD walkways, taken from a stationary camera. All videos are 8-bit grayscale, with dimensions 238 × 158 at 10 fps. The database is split into scenes, taken from different viewpoints (currently, only one scene is available...more are coming). Each scene is in its own directory vidX where X is a letter (e.g. vidf), and is split into video clips of length 200 frames named vidfXY 33 ZZZ.y, where Y and ZZZ are numbers. Finally, each video clip is saved as a set of .png files. Examples from each scene are presented in Figure 1. If you use this database, please reference it. ![](https://i.imgur.com/VTk15Rx.png)</description>
<size>791870738</size>
</item><item>
<title>Gland Segmentation in Histology Images Challenge (GlaS) Dataset</title>
<category>Dataset</category>
<infohash>208814dd113c2b0a242e74e832ccac28fcff74e5</infohash>
<guid>https://academictorrents.com/details/208814dd113c2b0a242e74e832ccac28fcff74e5</guid>
<link>https://academictorrents.com/details/208814dd113c2b0a242e74e832ccac28fcff74e5</link>
<description>![](http://www2.warwick.ac.uk/fac/sci/dcs/research/combi/research/bic/glascontest/glas3resize2.png) "We aim to bring together researchers who are interested in the gland segmentation problem, to validate the performance of their existing or newly invented algorithms on the same standard dataset. In this challenge, we will provide the participants with images of Haematoxylin and Eosin (H&amp;E) stained slides, consisting of a wide range of histologic grades." ![](https://i.imgur.com/GzSJCu4.png) ## Introduction Glands are important histological structures which are present in most organ systems as the main mechanism for secreting proteins and carbohydrates. It has been shown that malignant tumours arising from glandular epithelium, also known as adenocarcinomas, are the most prevalent form of cancer. The morphology of glands has been used routinely by pathologists to assess the degree of malignancy of several adenocarcinomas, including prostate, breast, lung, and colon. Accurate segmentation of glands is often a crucial step to obtain reliable morphological statistics. Nonetheless, the task by nature is very challenging due to the great variation of glandular morphology in different histologic grades. Up until now, the majority of studies focus on gland segmentation in healthy or benign samples, but rarely on intermediate or high grade cancer, and quite often, they are optimised to specific datasets. In this challenge, participants are encouraged to run their gland segmentation algorithms on images of Hematoxylin and Eosin (H&amp;E) stained slides, consisting of a variety of histologic grades. The dataset is provided together with ground truth annotations by expert pathologists. The participants are asked to develop and optimise their algorithms on the provided training dataset, and validate their algorithm on the test dataset. 
## Data Description The challenge will be conducted on a dataset acquired by a team of pathologists at the University Hospitals Coventry and Warwickshire, UK. Details of the dataset are as follows.

| Dataset | Warwick-QU |
|---|---|
| Cancer Type | Colorectal Cancer |
| Resolution | 20X (0.62005 μm/pixel) |
| Scanner | Zeiss MIRAX MIDI |
| Number of Images | 165 |
| Format | bmp |

The composition of the dataset is as follows.

| Split | Warwick-QU |
|---|---|
| Training | benign: 37, malignant: 48 |
| Test | benign: 37, malignant: 43 |

The ground truth for each image in the training dataset is stored in a BMP file, one label per ground truth object. ## Challenge Tasks After registration, the team will receive a username and password for downloading the training datasets. Each team is asked to submit a short paper, which includes a description of their segmentation algorithm and some preliminary results on the training dataset. See the submission section for more details. Teams that have submitted a short paper will be invited to present their work at the GlaS challenge at MICCAI 2015. The test dataset will be made available upon the acceptance of your invitation. The organisers will evaluate the performance of each segmentation algorithm on the test dataset and announce the final competition result at the GlaS Challenge event.</description>
<size>180902609</size>
</item><item>
<title>The Cars Overhead With Context (COWC)</title>
<category>Dataset</category>
<infohash>210dfc51f11dcfced602ad226962b7590e08c50a</infohash>
<guid>https://academictorrents.com/details/210dfc51f11dcfced602ad226962b7590e08c50a</guid>
<link>https://academictorrents.com/details/210dfc51f11dcfced602ad226962b7590e08c50a</link>
<description>The Cars Overhead With Context (COWC) data set is a large set of annotated cars from overhead. It is useful for training a device such as a deep neural network to learn to detect and/or count cars. More information can be obtained by reading our paper here. The dataset has the following attributes: (1) Data from overhead at 15 cm per pixel resolution at ground (all data is EO). (2) Data from six distinct locations: Toronto Canada, Selwyn New Zealand, Potsdam and Vaihingen Germany, Columbus and Utah United States. (3) 32,716 unique annotated cars. 58,247 unique negative examples. (4) Intentional selection of hard negative examples. (5) Established baseline for detection and counting tasks. (6) Extra testing scenes for use after validation. Data can be downloaded from our FTP server. The data includes wide area imagery with annotations as well as precompiled image sets for training/validation of classification and counting. Examples of the precompiled image sets are seen on the right. The dataset and research to create this data was done by members of the Computer Vision group within the Computation Engineering Division at Lawrence Livermore National Laboratory under grant from NA-22 in the Global Security Directorate. No Llamas were harmed in the creation of this set. ![](https://i.imgur.com/0dsvDo0.jpg)</description>
<size>9338326203</size>
</item><item>
<title>Water microdroplet dataset</title>
<category>Dataset</category>
<infohash>a8d14f22c9ce1cc59c9f480df5deb0f7e94861f4</infohash>
<guid>https://academictorrents.com/details/a8d14f22c9ce1cc59c9f480df5deb0f7e94861f4</guid>
<link>https://academictorrents.com/details/a8d14f22c9ce1cc59c9f480df5deb0f7e94861f4</link>
<description>Complete dataset of imaging flow cytometry of water-in-silicone-oil microdroplets acquired by an ultrafast optical time-stretch microscopy technique. Published in: Antony C. S. Chan, Ho-Cheung Ng, Sharat C. V. Bogaraju, Hayden K. H. So, Edmund Y. Lam &amp; Kevin K. Tsia, "All-passive pixel super-resolution of time-stretch imaging", Scientific Reports 7, 44608 (2017). http://dx.doi.org/10.1038/srep44608 Preprint: https://arxiv.org/abs/1610.05802</description>
<size>95152553</size>
</item><item>
<title>Scenedesmus dataset</title>
<category>Dataset</category>
<infohash>338a9fb90e5dccda4106d623768b6d40f3956ab0</infohash>
<guid>https://academictorrents.com/details/338a9fb90e5dccda4106d623768b6d40f3956ab0</guid>
<link>https://academictorrents.com/details/338a9fb90e5dccda4106d623768b6d40f3956ab0</link>
<description>Complete dataset of imaging flow cytometry of phytoplankton (species: Scenedesmus) cell colonies acquired by an ultrafast optical time-stretch microscopy technique. Published in: Antony C. S. Chan, Ho-Cheung Ng, Sharat C. V. Bogaraju, Hayden K. H. So, Edmund Y. Lam &amp; Kevin K. Tsia, "All-passive pixel super-resolution of time-stretch imaging", Scientific Reports 7, 44608 (2017). http://dx.doi.org/10.1038/srep44608 Preprint: https://arxiv.org/abs/1610.05802 Data hierarchy:</description>
<size>589623527</size>
</item><item>
<title>10-715 Advanced Introduction to Machine Learning - CMU - Fall 2015</title>
<category>Course</category>
<infohash>8165e591be4b54c2bc93b1e54c4374d42dcdd9f8</infohash>
<guid>https://academictorrents.com/details/8165e591be4b54c2bc93b1e54c4374d42dcdd9f8</guid>
<link>https://academictorrents.com/details/8165e591be4b54c2bc93b1e54c4374d42dcdd9f8</link>
<description>The rapid improvement of sensory techniques and processor speed, and the availability of inexpensive massive digital storage, have led to a growing demand for systems that can automatically comprehend and mine massive and complex data from diverse sources. Machine Learning is becoming the primary mechanism by which information is extracted from Big Data, and a primary pillar that Artificial Intelligence is built upon. This course is designed for Ph.D. students whose primary field of study is machine learning, or who intend to make machine learning methodological research a main focus of their thesis. It will give students a thorough grounding in the algorithms, mathematics, theories, and insights needed to do in-depth research and applications in machine learning. The topics of this course will in part parallel those covered in the general graduate machine learning course (10-701), but with a greater emphasis on depth in theory and algorithms. The course will also include additional advanced topics such as RKHS and representer theory, Bayesian nonparametrics, additional material on graphical models, manifolds and spectral graph theory, reinforcement learning and online learning, etc. Students entering the class are expected to have a pre-existing strong working knowledge of algorithms, linear algebra, probability, and statistics. If you are interested in this topic, but do not have the required background or are not planning to work on a PhD thesis with machine learning as the main focus, you might consider the general graduate Machine Learning course (10-701) or the Masters-level Machine Learning course (10-601). 
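The early supervised-learning lectures cover the perceptron and stochastic (per-example) gradient-style updates. As a flavor of that material, here is a minimal sketch of the classic mistake-driven perceptron update rule; this is illustrative only, not course code, and all names are made up:

```python
# Minimal perceptron trained with per-example (stochastic) updates.
# Illustrative sketch of the update rule; not part of the course materials.

def train_perceptron(samples, epochs=10, lr=1.0):
    """samples: list of (features, label) pairs with label in {-1, +1}."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score > 0:
                continue                                           # correctly classified: no update
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]         # mistake-driven update
            b += lr * y
    return w, b

# Linearly separable toy data: label is the sign of (x0 - x1)
data = [([2.0, 0.0], 1), ([0.0, 2.0], -1), ([3.0, 1.0], 1), ([1.0, 3.0], -1)]
w, b = train_perceptron(data)
```

On linearly separable data like this toy set, the update rule converges after a handful of epochs; lectures 2 onward generalize this idea to features, loss functions and neural networks.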
| Lecture | Day | Date | Block | Topic | Lecturer |
|---|---|---|---|---|---|
| 1 | W | Sep 9 | Supervised Learning | Introduction to Machine Learning, MLE, MAP, Naive Bayes | Barnabas |
| 2 | M | Sep 14 | | Perceptron, Features, Stochastic Gradient Descent | Alex |
| 3 | W | Sep 16 | | Neural Networks: Backprop, Layers | Alex |
| 4 | M | Sep 21 | | Neural Networks: State, Memory, Representations | Alex |
| 5 | W | Sep 23 | Unsupervised Learning | Clustering, K-Means | Barnabas |
| 6 | M | Sep 28 | | Expectation Maximization, Mixture of Gaussians | Barnabas |
| 7 | W | Sep 30 | | Principal Component Analysis | Barnabas |
| 8 | M | Oct 5 | Kernel Machines | Convex Optimization, Duality, Linear and Quadratic Programs | Alex |
| 9 | W | Oct 7 | | Support Vector Classification, Regression, Novelty Detection | Alex |
| 10 | M | Oct 12 | | Features, Kernels, Hilbert Spaces | Alex |
| 11 | W | Oct 14 | | Gaussian Processes 1 | Barnabas |
| 12 | M | Oct 19 | | Gaussian Processes 2 | Barnabas |
| 13 | W | Oct 21 | Latent Space Models | Independent Component Analysis | Barnabas |
| 14 | M | Oct 26 | Graphical Models | Hidden Markov Models | Alex |
| 15 | W | Oct 28 | | Directed Models | Alex |
| 16 | M | Nov 2 | | Undirected Models | Alex |
| 17 | W | Nov 4 | | Sampling, Markov Chain Monte Carlo Methods | Alex |
| 18 | M | Nov 9 | Midterm exam | | |
| 19 | W | Nov 11 | Computational Learning Theory | Risk Minimization | Barnabas |
| 20 | M | Nov 16 | | VC Dimension | Barnabas |
| 21 | W | Nov 18 | Nonlinear Dimensionality Reduction | Manifold Learning | Barnabas |
| 22 | M | Nov 23 | Big Data and Scalability | Systems for Machine Learning, Parameter Server | Alex |
| | W | Nov 25 | Thanksgiving Holiday | | |
| 23 | M | Nov 30 | Project Presentations | | students |
| 24 | M | Dec 2 | Project Presentations | | students |</description>
<size>111180431590</size>
</item><item>
<title>MIT Course 9.520 - Statistical Learning Theory and Applications, Fall 2015</title>
<category>Course</category>
<infohash>8b47f45382645882a23e0f8d9d9fbb764b3eb378</infohash>
<guid>https://academictorrents.com/details/8b47f45382645882a23e0f8d9d9fbb764b3eb378</guid>
<link>https://academictorrents.com/details/8b47f45382645882a23e0f8d9d9fbb764b3eb378</link>
<description>Course description

The class covers foundations and recent advances of Machine Learning from the point of view of Statistical Learning Theory. Understanding intelligence and how to replicate it in machines is arguably one of the greatest problems in science. Learning, its principles and computational implementations, is at the very core of intelligence. During the last decade, for the first time, we have been able to develop artificial intelligence systems that can solve complex tasks considered out of reach. ATM machines read checks, cameras recognize faces, smart phones understand your voice and cars can see and avoid obstacles. The machine learning algorithms that are at the roots of these success stories are trained with labeled examples rather than programmed to solve a task. Among the approaches in modern machine learning, the course focuses on regularization techniques, which provide a theoretical foundation to high-dimensional supervised learning. Besides classic approaches such as Support Vector Machines, the course covers state-of-the-art techniques exploiting data geometry (aka manifold learning), sparsity and a variety of algorithms for supervised learning (batch and online), feature selection, structured prediction and multitask learning. Concepts from optimization theory useful for machine learning are covered in some detail (first order methods, proximal/splitting techniques...). The final part of the course will focus on deep learning networks. It will introduce a theoretical framework connecting the computations within the layers of deep learning networks to kernel machines. It will study an extension of the convolutional layers in order to deal with more general invariance properties and to learn them from implicitly supervised data.
This theory of hierarchical architectures may explain how the visual cortex learns, in an implicitly supervised way, a data representation that can lower the sample complexity of a final supervised learning stage. The goal of this class is to provide students with the theoretical knowledge and the basic intuitions needed to use and develop effective machine learning solutions to challenging problems.

Prerequisites: We will make extensive use of linear algebra, basic functional analysis (we cover the essentials in class and during the math camp), and basic concepts in probability theory and concentration of measure (also covered in class and during the math camp). Students are expected to be familiar with MATLAB.

| Class | Date | Title | Instructor(s) |
|---|---|---|---|
| Class 01 | Wed Sep 09 | The Course at a Glance | TP |
| Class 02 | Mon Sep 14 | The Learning Problem and Regularization | TP |
| Class 03 | Wed Sep 16 | Math Camp | CF/CC |
| Class 04 | Mon Sep 21 | Reproducing Kernel Hilbert Spaces | LR |
| Class 05 | Wed Sep 23 | Dictionaries, Feature Maps and Mercer Theorem | LR |
| Class 06 | Mon Sep 28 | Tikhonov Regularization and the Representer Theorem | LR |
| Class 07 | Wed Sep 30 | Logistic Regression and Support Vector Machines | LR |
| Class 08 | Mon Oct 05 | Regularized Least Squares | LR |
| Class 09 | Wed Oct 07 | Iterative Regularization via Early Stopping | LR |
| Mon Oct 12 - Columbus Day | | | |
| Class 10 | Tue Oct 13 | Sparsity Based Regularization | LR |
| Class 11 | Wed Oct 14 | Proximal Methods | LR |
| Class 12 | Mon Oct 19 | Structured Sparsity Regularization | LR |
| Class 13 | Wed Oct 21 | Multiple Kernel Learning | LR |
| Class 14 | Mon Oct 26 | Generalization Bounds, Intro to Stability | CF/TP |
| Class 15 | Wed Oct 28 | Stability of Tikhonov Regularization | CF/TP |
| Class 16 | Mon Nov 02 | Consistency, Learnability and Regularization | LR |
| Class 17 | Wed Nov 04 | On-line Learning | LR |
| Class 18 | Mon Nov 09 | Manifold Regularization | LR |
| Wed Nov 11 - Veterans Day | | | |
| Class 19 | Mon Nov 16 | Regularization for Multi-Output Learning I | LR |
| Class 20 | Wed Nov 18 | Regularization for Multi-Output Learning II | CC |
| Class 21 | Mon Nov 23 | Learning Data Representation: from Fourier to Compressed Sensing | LR |
| Class 22 | Wed Nov 25 | Learning Data Representation: Autoencoders and Dictionary Learning | LR |
| Class 23 | Mon Nov 30 | Learning Data Representation: Deep Neural Networks (DNNs) | LR |
| Class 24 | Wed Dec 02 | Learning Data Representation: Deep Theory I | TP |
| Class 25 | Mon Dec 07 | Learning Data Representation: DNN Tips and Tricks | Gemma Roig |
| Class 26 | Wed Dec 09 | Learning Data Representation: Deep Theory II | TP |</description>
<size>41055413520</size>
</item><item>
<title>NIST 8-Bit Gray Scale Images of Fingerprint Image Groups (FIGS)</title>
<category>Dataset</category>
<infohash>d7e67e86f0f936773f217dbbb9c149c4d98748c6</infohash>
<guid>https://academictorrents.com/details/d7e67e86f0f936773f217dbbb9c149c4d98748c6</guid>
<link>https://academictorrents.com/details/d7e67e86f0f936773f217dbbb9c149c4d98748c6</link>
<description>The NIST database of fingerprint images contains 2000 8-bit gray scale fingerprint image pairs. Each image is 512-by-512 pixels with 32 rows of white space at the bottom, and is classified using one of the following five classes:</description>
<size>823994914</size>
</item><item>
<title>Regularization Methods for Machine Learning 2016</title>
<category>Course</category>
<infohash>493251615310f9b6ae1f483126292378137074cd</infohash>
<guid>https://academictorrents.com/details/493251615310f9b6ae1f483126292378137074cd</guid>
<link>https://academictorrents.com/details/493251615310f9b6ae1f483126292378137074cd</link>
<description>Understanding how intelligence works and how it can be emulated in machines is an age-old dream and arguably one of the biggest challenges in modern science. Learning, with its principles and computational implementations, is at the very core of this endeavor. Recently, for the first time, we have been able to develop artificial intelligence systems able to solve complex tasks considered out of reach for decades. Modern cameras recognize faces, smart phones understand voice commands, cars can see and detect pedestrians, and ATM machines automatically read checks. In most cases, at the root of these success stories are machine learning algorithms, that is, software that is trained rather than programmed to solve a task. Among the variety of approaches to modern computational learning, we focus on regularization techniques, which are key to high-dimensional learning. Regularization methods allow one to treat a huge class of diverse approaches in a unified way, while providing tools to design new ones. Starting from classical notions of smoothness, shrinkage and margin, the course will cover state-of-the-art techniques based on the concepts of geometry (aka manifold learning), sparsity and a variety of algorithms for supervised learning, feature selection, structured prediction, multitask learning and model selection. Practical applications for high-dimensional problems, in particular in computational vision, will be discussed. The classes will focus on algorithmic and methodological aspects, while trying to give an idea of the theoretical underpinnings. Practical laboratory sessions will give the opportunity for hands-on experience. RegML is a 20-hour advanced machine learning course including theory classes and practical laboratory sessions. The course covers foundations as well as recent advances in Machine Learning with emphasis on high-dimensional data and a core set of techniques, namely regularization methods.
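As a taste of the core technique named above, the simplest regularization method (Tikhonov-regularized least squares, i.e. ridge regression) has a closed-form solution, w = (XᵀX + λI)⁻¹Xᵀy. A minimal sketch follows; it is illustrative only and not part of the course materials:

```python
import numpy as np

def ridge_fit(X, y, lam=0.1):
    """Tikhonov-regularized least squares: solves (X^T X + lam*I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Noise-free toy data generated by the true weights [2, -1]
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = X @ np.array([2.0, -1.0])

w = ridge_fit(X, y, lam=1e-6)   # a tiny lam recovers the true weights
```

The regularization parameter lam trades data fit against the smoothness/shrinkage of w; choosing it by cross-validation is exactly the model-selection topic of the first laboratory session.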
In many respects, the course is a compressed version of the 9.520 course at MIT.

| CLASS | DAY | TIME | SUBJECT | FILES |
|---|---|---|---|---|
| 1 | Mon 6/27 | 9:30 - 11:00 | Introduction to Statistical Machine Learning | Lect_1 |
| 2 | Mon 6/27 | 11:30 - 13:00 | Tikhonov Regularization and Kernels | Lect_2 |
| 3 | Mon 6/27 | 14:00 - 16:00 | Laboratory 1: Binary classification and model selection | Lab 1 |
| 4 | Tue 6/28 | 9:30 - 11:00 | Early Stopping and Spectral Regularization | Lect_3 |
| 5 | Tue 6/28 | 11:30 - 13:00 | Regularization for Multi-task Learning | Lect_4 |
| 6 | Tue 6/28 | 14:00 - 16:00 | Laboratory 2: Spectral filters and multi-class classification | Lab 2 |
| - | Wed 6/29 | 9:30 - 10:00 | Workshop: Federico Girosi - Health Analytics and Machine Learning | |
| - | Wed 6/29 | 10:00 - 10:30 | Workshop: Massimiliano Pontil - A Class of Regularizers based on Optimal Interpolation | |
| - | Wed 6/29 | 10:30 - 11:00 | Workshop: Gadi Geiger - Visual and Auditory Aspects of Perception in Developmental Dyslexia | |
| - | Wed 6/29 | 11:00 - 11:30 | Coffee Break | |
| - | Wed 6/29 | 11:30 - 12:00 | Workshop: Alessandro Verri - Extracting Biomedical Knowledge through Regularized Learning Techniques | |
| - | Wed 6/29 | 12:00 - 12:30 | Workshop: Thomas Vetter - Learning the Appearance of Faces: Probabilistic Morphable Models | |
| - | Wed 6/29 | Afternoon | Free | |
| 7 | Thu 6/30 | 9:30 - 11:00 | Sparsity Based Regularization | Lect_5 |
| 8 | Thu 6/30 | 11:30 - 13:00 | Structured Sparsity | Lect_6 |
| 9 | Thu 6/30 | 14:00 - 16:00 | Laboratory 3: Sparsity-based learning | Lab 3 |
| 10 | Fri 7/1 | 9:30 - 11:00 | Data Representation: Dictionary Learning | Lect_7 |
| 11 | Fri 7/1 | 11:30 - 13:00 | Data Representation: Deep Learning | Lect_8 |</description>
<size>4589105323</size>
</item><item>
<title>Large Scale Machine Learning - UToronto - STA 4273H Winter 2015</title>
<category>Course</category>
<infohash>deb96e8d1f88d9b3a09098ce27c986507ae97b5e</infohash>
<guid>https://academictorrents.com/details/deb96e8d1f88d9b3a09098ce27c986507ae97b5e</guid>
<link>https://academictorrents.com/details/deb96e8d1f88d9b3a09098ce27c986507ae97b5e</link>
<description>Lecture 1 &amp;mdash; Machine Learning: Introduction to Machine Learning, Linear Models for Regression. Reading: Bishop, Chapter 1: sec. 1.1 - 1.5, and Chapter 3: sec. 1.1 - 1.3. Optional: Bishop, Chapter 2: background material; Hastie, Tibshirani, Friedman, Chapters 2 and 3.

Lecture 2 &amp;mdash; Bayesian Framework: Bayesian Linear Regression, Evidence Maximization, Linear Models for Classification. Reading: Bishop, Chapter 3: sec. 3.3 - 3.5, and Chapter 4. Optional: Radford Neal's NIPS tutorial on Bayesian Methods for Machine Learning. Also see Max Welling's notes on Fisher Linear Discriminant Analysis.

Lecture 3 &amp;mdash; Classification: Linear Models for Classification, Generative and Discriminative Approaches, Laplace Approximation. Reading: Bishop, Chapter 4. Optional: Hastie, Tibshirani, Friedman, Chapter 4.

Lecture 4 &amp;mdash; Graphical Models: Bayesian Networks, Markov Random Fields. Reading: Bishop, Chapter 8. Optional: Hastie, Tibshirani, Friedman, Chapter 17 (Undirected Graphical Models); MacKay, Chapter 21 (Bayesian nets) and Chapter 43 (Boltzmann machines). Also see the paper on graphical models, exponential families, and variational inference by M. Wainwright and M. Jordan, Foundations and Trends in Machine Learning.

Lecture 5 &amp;mdash; Mixture Models and EM: Mixture of Gaussians, Generalized EM, Variational Bound. Reading: Bishop, Chapter 9. Optional: Hastie, Tibshirani, Friedman, Chapter 13 (Prototype Methods); MacKay, Chapter 22 (Maximum Likelihood and Clustering).

Lecture 6 &amp;mdash; Variational Inference: Mean-Field, Bayesian Mixture Models, Variational Bound. Reading: Bishop, Chapter 10. Optional: MacKay, Chapter 33 (Variational Inference).

Lecture 7 &amp;mdash; Sampling Methods: Rejection Sampling, Importance Sampling, M-H and Gibbs. Reading: Bishop, Chapter 11. Optional: MacKay, Chapter 29 (Monte Carlo Methods).

Lecture 8 &amp;mdash; Continuous Latent Variable Models: PCA, FA, ICA, Deep Autoencoders. Reading: Bishop, Chapter 12. Optional: Hastie, Tibshirani, Friedman, Chapters 14.5, 14.7, 14.9 (PCA, ICA, nonlinear dimensionality reduction); MacKay, Chapter 34 (Latent Variable Models).

Lecture 9 &amp;mdash; Modeling Sequential Data: HMMs, LDS, Particle Filters. Reading: Bishop, Chapter 13.</description>
<size>3145762100</size>
</item><item>
<title>7,000 Hillary Clinton Emails</title>
<category>Dataset</category>
<infohash>1789304373ffd2846c175107709f76022111385c</infohash>
<guid>https://academictorrents.com/details/1789304373ffd2846c175107709f76022111385c</guid>
<link>https://academictorrents.com/details/1789304373ffd2846c175107709f76022111385c</link>
<description>7,000 of Hillary Clinton’s personal emails from her time as secretary of state. The emails touch on a variety of issues from WikiLeaks to gefilte fish. Clinton's use of a personal email account has become a heated issue during the 2016 presidential campaign, as some say she should have been using a government account, rather than her personal one, while in office. Critics have called the affair a scandal, implying the former secretary of state and Democratic presidential front-runner has something to hide.</description>
<size>5259657216</size>
</item><item>
<title>GC-MS Database NIST/EPA/NIH MASS SPECTRAL LIBRARY (NIST 08) + update 2010 2.0f Apr 1 2009 x86 [2008, ENG]</title>
<category>Dataset</category>
<infohash>d802a61207d2cefb71face029b5227187ba77463</infohash>
<guid>https://academictorrents.com/details/d802a61207d2cefb71face029b5227187ba77463</guid>
<link>https://academictorrents.com/details/d802a61207d2cefb71face029b5227187ba77463</link>
<description>GC-MS Database NIST/EPA/NIH MASS SPECTRAL LIBRARY (NIST 08) + update 2010 2.0f Apr 1 2009 x86 [2008, ENG] This library package contains the NIST 2008 Mass Spectral Library in the following manufacturer formats:

1. Agilent Chemstation (.L) (with structures)
2. NIST MS Search (compatible with most mass spectrometry software brands): Bruker; JEOL; LECO; PerkinElmer TurboMass; Thermo Electron XCalibur; Varian MS Workstation; Waters MassLynx; and other brands
3. PerkinElmer TurboMass (IDB) (with structures)
4. Shimadzu GCMS Solution (QP5000) (SPC) (no structures)
5. Waters MassLynx (IDB) (with structures)
6. Finnigan GCQ/Varian ITS-40
7. Thermo Galactic Spectral ID

Includes:
- Over 220,000 spectra
- Over 190,000 chemical structures
- GC Retention Index Library, MS/MS Library
- License keys</description>
<size>651990202</size>
</item><item>
<title>Avantes Dual Spectrograph.zip</title>
<category>Dataset</category>
<infohash>ff051a9469c9ceda93ea914a21639a639cbae793</infohash>
<guid>https://academictorrents.com/details/ff051a9469c9ceda93ea914a21639a639cbae793</guid>
<link>https://academictorrents.com/details/ff051a9469c9ceda93ea914a21639a639cbae793</link>
<description>For debugging dual spectrograph.</description>
<size>21477445031</size>
</item><item>
<title>True Marble Global Image Dataset GeoTIFF</title>
<category>Dataset</category>
<infohash>b9b284d9c0074846fee28e78aac4440fd7c0f51c</infohash>
<guid>https://academictorrents.com/details/b9b284d9c0074846fee28e78aac4440fd7c0f51c</guid>
<link>https://academictorrents.com/details/b9b284d9c0074846fee28e78aac4440fd7c0f51c</link>
<description>Download and use the 250m True Marble global dataset for free! This is a low resolution version of our full 15m product, but it is quite useful. Download to use on your web page or preview a purchase. We only ask that you display our copyright and reference this page when using it. Two types of files are available for download: GeoTIFF and PNG. The GeoTIFF files are better suited for GIS programs, but are generally a larger file size. The PNG files are for general image processing programs, but are not georeferenced. Most of these files are much too large for your web browser to display, so be sure to save the file directly to disk. ![](http://www.unearthedoutdoors.net/imgs/global_data/thumbs/TrueMarble.500m.A1.jpg) ![](http://www.unearthedoutdoors.net/imgs/global_data/thumbs/TrueMarble.500m.B1.jpg) ![](http://www.unearthedoutdoors.net/imgs/global_data/thumbs/TrueMarble.500m.C1.jpg) ![](http://www.unearthedoutdoors.net/imgs/global_data/thumbs/TrueMarble.500m.D1.jpg) ![](http://www.unearthedoutdoors.net/imgs/global_data/thumbs/TrueMarble.500m.A2.jpg) ![](http://www.unearthedoutdoors.net/imgs/global_data/thumbs/TrueMarble.500m.B2.jpg) ![](http://www.unearthedoutdoors.net/imgs/global_data/thumbs/TrueMarble.500m.C2.jpg) ![](http://www.unearthedoutdoors.net/imgs/global_data/thumbs/TrueMarble.500m.D2.jpg)</description>
<size>9737051000</size>
</item><item>
<title>Sentiment Labelled Sentences Data Set </title>
<category>Dataset</category>
<infohash>07e05fc1229555e124df72160a01b2540d04cebf</infohash>
<guid>https://academictorrents.com/details/07e05fc1229555e124df72160a01b2540d04cebf</guid>
<link>https://academictorrents.com/details/07e05fc1229555e124df72160a01b2540d04cebf</link>
<description>This dataset was created for the paper "From Group to Individual Labels using Deep Features", Kotzias et al., KDD 2015. Please cite the paper if you want to use it :) It contains sentences labelled with positive or negative sentiment.

### Format:
sentence \t score

### Details:
Score is either 1 (for positive) or 0 (for negative). The sentences come from three different websites/fields: imdb.com, amazon.com, yelp.com. For each website, there are 500 positive and 500 negative sentences. These were selected randomly from larger datasets of reviews. We attempted to select sentences with a clearly positive or negative connotation; the goal was for no neutral sentences to be selected.

### Attribute Information:
The attributes are text sentences, extracted from reviews of products, movies, and restaurants.

### Relevant Papers:
"From Group to Individual Labels using Deep Features", Kotzias et al., KDD 2015</description>
<size>512208</size>
</item><item>
<title>Enron Email Dataset</title>
<category>Dataset</category>
<infohash>4697a6e1e7841602651b087d84f904d43590d4ff</infohash>
<guid>https://academictorrents.com/details/4697a6e1e7841602651b087d84f904d43590d4ff</guid>
<link>https://academictorrents.com/details/4697a6e1e7841602651b087d84f904d43590d4ff</link>
<description>To quote the data source: "This dataset was collected and prepared by the CALO Project (A Cognitive Assistant that Learns and Organizes). It contains data from about 150 users, mostly senior management of Enron, organized into folders. The corpus contains a total of about 0.5M messages. This data was originally made public, and posted to the web, by the Federal Energy Regulatory Commission during its investigation. The email dataset was later purchased by Leslie Kaelbling at MIT, and turned out to have a number of integrity problems. A number of folks at SRI, notably Melinda Gervasio, worked hard to correct these problems, and it is thanks to them (not me) that the dataset is available. The dataset here does not include attachments, and some messages have been deleted "as part of a redaction effort due to requests from affected employees". Invalid email addresses were converted to something of the form user@enron.com whenever possible (i.e., recipient is specified in some parse-able format like "Doe, John" or "Mary K. Smith") and to no_address@enron.com when no recipient was specified. I get a number of questions about this corpus each week, which I am unable to answer, mostly because they deal with preparation issues and such that I just don't know about. If you ask me a question and I don't answer, please don't feel slighted. I am distributing this dataset as a resource for researchers who are interested in improving current email tools, or understanding how email is currently used. This data is valuable; to my knowledge it is the only substantial collection of "real" email that is public. The reason other datasets are not public is because of privacy concerns. In using this dataset, please be sensitive to the privacy of the people involved (and remember that many of these people were certainly not involved in any of the actions which precipitated the investigation.)"</description>
<size>443254787</size>
</item><item>
<title>A collection of sport activity datasets for data analysis and data mining 2016b</title>
<category>Dataset</category>
<infohash>2a81590d3b32e6ddd8a87f1ec4f08205098476ee</infohash>
<guid>https://academictorrents.com/details/2a81590d3b32e6ddd8a87f1ec4f08205098476ee</guid>
<link>https://academictorrents.com/details/2a81590d3b32e6ddd8a87f1ec4f08205098476ee</link>
<description/>
<size>623309070</size>
</item><item>
<title>Geocities - The Torrent</title>
<category>Dataset</category>
<infohash>2dc18f47afee0307e138dab3015ee7e5154766f6</infohash>
<guid>https://academictorrents.com/details/2dc18f47afee0307e138dab3015ee7e5154766f6</guid>
<link>https://academictorrents.com/details/2dc18f47afee0307e138dab3015ee7e5154766f6</link>
<description>This is a collection of Geocities data downloaded by a bunch of people who call themselves ARCHIVE TEAM, who began scraping the Yahoo! Geocities site during a six-month period in 2009, before Yahoo! shut down geocities.com on October 26th, 2009. This collection is compressed in a UNIX filesystem with both 7zip archives and tape archives (gtar). If you're a bit of a data tourist and just want to waft in the scent of a web era gone by, please go to one of the Geocities mirrors that were put up in the wake of the end of Geocities. As of this writing, these mirrors include: http://www.reocities.com http://www.geocities.ws http://www.geociti.es http://www.oocities.org/ Logo: https://i.imgur.com/A7sWsiy.png</description>
<size>688701020706</size>
</item><item>
<title>Collaborative peer production as an alternative to hierarchical internet based business systems</title>
<category>Paper</category>
<infohash>42cf6c93c7e382bcdcec06007a72f35084f3ec7b</infohash>
<guid>https://academictorrents.com/details/42cf6c93c7e382bcdcec06007a72f35084f3ec7b</guid>
<link>https://academictorrents.com/details/42cf6c93c7e382bcdcec06007a72f35084f3ec7b</link>
<description>As we move towards more data-intensive, device-centric global communication networks, our ability to usefully harvest these large datastores is degrading. The widening asymmetry between the explosive growth of data and our ability to use it is forcing us towards centralized analytics. This splintered concentration of data further consolidates analytical capabilities in the hands of the few and divides the network into the analysers and the analysed. The fracturing of the system into opaque datastores and analytics blocks creates a strong positive feedback loop and has a significant negative impact on the stability, transparency and freedom of the network. This paper identifies the problems associated with the internet and internet-dependent business models, reviews the available solutions, and discusses the remedies that have become necessary.</description>
<size>711376</size>
</item><item>
<title>MXNet pre-trained model Full ImageNet Network inception-21k.tar.gz</title>
<category>Dataset</category>
<infohash>27330fbd1ec0648e72b2cf5c40aa0d4df1931221</infohash>
<guid>https://academictorrents.com/details/27330fbd1ec0648e72b2cf5c40aa0d4df1931221</guid>
<link>https://academictorrents.com/details/27330fbd1ec0648e72b2cf5c40aa0d4df1931221</link>
<description># Full ImageNet Network This model is a pretrained model on the full ImageNet dataset [1], with 14,197,087 images in 21,841 classes. The model is trained with only random-crop and mirror augmentation. The network is based on the Inception-BN network [2] with added capacity, and runs roughly 2 times slower than the standard Inception-BN network. We trained this network on a machine with 4 GeForce GTX 980 GPUs. Each training round takes 23 hours; the released model is from the 9th round. Train Top-1 accuracy over 21,841 classes: 37.19%. Single-image prediction memory requirement: 15MB. ILSVRC2012 validation performance:

|        | Over 1,000 classes | Over 21,841 classes |
| --- | --- | --- |
| Top-1  | 68.3%              | 41.9%               |
| Top-5  | 89.0%              | 69.6%               |</description>
<size>125141204</size>
</item><item>
<title>[Coursera] Major Depression in the Population: A Public Health Approach</title>
<category>Course</category>
<infohash>bef456926de91db59d256b7ca14d466ecaf1efea</infohash>
<guid>https://academictorrents.com/details/bef456926de91db59d256b7ca14d466ecaf1efea</guid>
<link>https://academictorrents.com/details/bef456926de91db59d256b7ca14d466ecaf1efea</link>
<description/>
<size>424824209</size>
</item><item>
<title>[Coursera] Neuroethics by Jonathan D. Moreno (University of Pennsylvania)</title>
<category>Course</category>
<infohash>b7bfebe683ee21c8c9d7b47cc42655e3d91ead8b</infohash>
<guid>https://academictorrents.com/details/b7bfebe683ee21c8c9d7b47cc42655e3d91ead8b</guid>
<link>https://academictorrents.com/details/b7bfebe683ee21c8c9d7b47cc42655e3d91ead8b</link>
<description/>
<size>858794417</size>
</item><item>
<title>[Coursera] Nudge-It Understanding Obesity (University of Edinburgh)</title>
<category>Course</category>
<infohash>d742c87581ac5d231011cb80836502577522e089</infohash>
<guid>https://academictorrents.com/details/d742c87581ac5d231011cb80836502577522e089</guid>
<link>https://academictorrents.com/details/d742c87581ac5d231011cb80836502577522e089</link>
<description/>
<size>244887641</size>
</item><item>
<title>[Coursera] The Social Context of Mental Health and Illness (University of Toronto)</title>
<category>Course</category>
<infohash>ab9ba816268fd74292d888cfb3f413803a178b90</infohash>
<guid>https://academictorrents.com/details/ab9ba816268fd74292d888cfb3f413803a178b90</guid>
<link>https://academictorrents.com/details/ab9ba816268fd74292d888cfb3f413803a178b90</link>
<description/>
<size>1525882637</size>
</item><item>
<title>[Coursera] What A Plant Knows (Tel Aviv University)</title>
<category>Course</category>
<infohash>51368bd165b56d4bb56e8106a6e4621914d533e8</infohash>
<guid>https://academictorrents.com/details/51368bd165b56d4bb56e8106a6e4621914d533e8</guid>
<link>https://academictorrents.com/details/51368bd165b56d4bb56e8106a6e4621914d533e8</link>
<description/>
<size>864039049</size>
</item><item>
<title>[Coursera] Useful Genetics Part II by Rosemary Redfield (University of British Columbia)</title>
<category>Course</category>
<infohash>1ac95dc99ded62ac6bffd24aa3e1b6acb587635b</infohash>
<guid>https://academictorrents.com/details/1ac95dc99ded62ac6bffd24aa3e1b6acb587635b</guid>
<link>https://academictorrents.com/details/1ac95dc99ded62ac6bffd24aa3e1b6acb587635b</link>
<description/>
<size>1387625543</size>
</item><item>
<title>[Coursera] Preventing Chronic Pain A Human Systems Approach (University of Minnesota)</title>
<category>Course</category>
<infohash>271e22f67c7bda33158f2d83f443728cd749cd42</infohash>
<guid>https://academictorrents.com/details/271e22f67c7bda33158f2d83f443728cd749cd42</guid>
<link>https://academictorrents.com/details/271e22f67c7bda33158f2d83f443728cd749cd42</link>
<description/>
<size>3847421323</size>
</item><item>
<title>[Coursera] Canine Theriogenology for Dog Enthusiasts by Margaret V. Root (University of Minnesota)</title>
<category>Course</category>
<infohash>a673ea4f9de6bd657750a7ff56723d58eac24f87</infohash>
<guid>https://academictorrents.com/details/a673ea4f9de6bd657750a7ff56723d58eac24f87</guid>
<link>https://academictorrents.com/details/a673ea4f9de6bd657750a7ff56723d58eac24f87</link>
<description/>
<size>491328190</size>
</item><item>
<title>[Coursera] Useful Genetics Part I by Rosemary Redfield (University of British Columbia)</title>
<category>Course</category>
<infohash>d0262f08717ba584551357e4bf8c1945dc6d6935</infohash>
<guid>https://academictorrents.com/details/d0262f08717ba584551357e4bf8c1945dc6d6935</guid>
<link>https://academictorrents.com/details/d0262f08717ba584551357e4bf8c1945dc6d6935</link>
<description/>
<size>1850217192</size>
</item><item>
<title>[Coursera] Introduction to Psychology as a Science (Georgia Institute of Technology)</title>
<category>Course</category>
<infohash>3fdc727c3f4bd1bf2eb5adb13e1984fd2526c46e</infohash>
<guid>https://academictorrents.com/details/3fdc727c3f4bd1bf2eb5adb13e1984fd2526c46e</guid>
<link>https://academictorrents.com/details/3fdc727c3f4bd1bf2eb5adb13e1984fd2526c46e</link>
<description/>
<size>1344522741</size>
</item><item>
<title>[Coursera] VLSI CAD: Logic to Layout by Rob A. Rutenbar (University of Illinois at Urbana-Champaign)</title>
<category>Course</category>
<infohash>625ae5f99f1cfdc2b8eb42577ca5271ad78967e0</infohash>
<guid>https://academictorrents.com/details/625ae5f99f1cfdc2b8eb42577ca5271ad78967e0</guid>
<link>https://academictorrents.com/details/625ae5f99f1cfdc2b8eb42577ca5271ad78967e0</link>
<description/>
<size>1436500392</size>
</item><item>
<title>[Coursera] Greek and Roman Mythology (University of Pennsylvania)</title>
<category>Course</category>
<infohash>01212a89f842f5ad1364c7fa3a8c4c514e2b26ab</infohash>
<guid>https://academictorrents.com/details/01212a89f842f5ad1364c7fa3a8c4c514e2b26ab</guid>
<link>https://academictorrents.com/details/01212a89f842f5ad1364c7fa3a8c4c514e2b26ab</link>
<description/>
<size>2299704331</size>
</item><item>
<title>[Coursera] Beginning Game Programming with C# (University of Colorado System)</title>
<category>Course</category>
<infohash>0a7ba7e62821e488a0061751fdb81f4298733bea</infohash>
<guid>https://academictorrents.com/details/0a7ba7e62821e488a0061751fdb81f4298733bea</guid>
<link>https://academictorrents.com/details/0a7ba7e62821e488a0061751fdb81f4298733bea</link>
<description/>
<size>2127977730</size>
</item><item>
<title>[Coursera] - (Operating Systems) by Professor Chen Xiangqun (Peking University)</title>
<category>Course</category>
<infohash>3fca7db35ba4773d416cc6a297a196cec41fa9f4</infohash>
<guid>https://academictorrents.com/details/3fca7db35ba4773d416cc6a297a196cec41fa9f4</guid>
<link>https://academictorrents.com/details/3fca7db35ba4773d416cc6a297a196cec41fa9f4</link>
<description/>
<size>3066748519</size>
</item><item>
<title>[Coursera] Nanotechnology: The Basics by Professor Vicki Colvin, Daniel Mittleman (Rice University)</title>
<category>Course</category>
<infohash>e54694787f54a7a0096d9b0f062d31a3a3925ae8</infohash>
<guid>https://academictorrents.com/details/e54694787f54a7a0096d9b0f062d31a3a3925ae8</guid>
<link>https://academictorrents.com/details/e54694787f54a7a0096d9b0f062d31a3a3925ae8</link>
<description/>
<size>2392720056</size>
</item><item>
<title>[Coursera] Analysis of Algorithms by Robert Sedgewick (Princeton University)</title>
<category>Course</category>
<infohash>ebdac281fb3e9ed0a5c7abadcda3575f54c73a8f</infohash>
<guid>https://academictorrents.com/details/ebdac281fb3e9ed0a5c7abadcda3575f54c73a8f</guid>
<link>https://academictorrents.com/details/ebdac281fb3e9ed0a5c7abadcda3575f54c73a8f</link>
<description/>
<size>1882637828</size>
</item><item>
<title>[Coursera] Mathematical Methods for Quantitative Finance by Dr. Kjell Konis (University of Washington)</title>
<category>Course</category>
<infohash>dfc1ddde962101f00ef9764b91181bd6bb5c9e93</infohash>
<guid>https://academictorrents.com/details/dfc1ddde962101f00ef9764b91181bd6bb5c9e93</guid>
<link>https://academictorrents.com/details/dfc1ddde962101f00ef9764b91181bd6bb5c9e93</link>
<description/>
<size>6421236690</size>
</item><item>
<title>[Coursera] How to Succeed in College by Dr. Jonathan Golding, Dr. Phil Kraemer (University of Kentucky)</title>
<category>Course</category>
<infohash>956ac58723e9f39e766741a6ea794006d20807c3</infohash>
<guid>https://academictorrents.com/details/956ac58723e9f39e766741a6ea794006d20807c3</guid>
<link>https://academictorrents.com/details/956ac58723e9f39e766741a6ea794006d20807c3</link>
<description/>
<size>754196560</size>
</item><item>
<title>[Coursera] Web Intelligence and Big Data by Dr. Gautam Shroff</title>
<category>Course</category>
<infohash>f871965e36ec21677d44375930d6cbb0dab7bfb3</infohash>
<guid>https://academictorrents.com/details/f871965e36ec21677d44375930d6cbb0dab7bfb3</guid>
<link>https://academictorrents.com/details/f871965e36ec21677d44375930d6cbb0dab7bfb3</link>
<description/>
<size>1478650060</size>
</item><item>
<title>[Coursera] Geodesign: Change Your World (Pennsylvania State University)</title>
<category>Course</category>
<infohash>668ad6d48cd6806e83a049d375c95aa9962a3ff3</infohash>
<guid>https://academictorrents.com/details/668ad6d48cd6806e83a049d375c95aa9962a3ff3</guid>
<link>https://academictorrents.com/details/668ad6d48cd6806e83a049d375c95aa9962a3ff3</link>
<description/>
<size>452566789</size>
</item><item>
<title>[Coursera] Experimental Genome Science (University of Pennsylvania)</title>
<category>Course</category>
<infohash>256afd4caa8e8a79eb8c7fc0d12431d430a0cba7</infohash>
<guid>https://academictorrents.com/details/256afd4caa8e8a79eb8c7fc0d12431d430a0cba7</guid>
<link>https://academictorrents.com/details/256afd4caa8e8a79eb8c7fc0d12431d430a0cba7</link>
<description/>
<size>1351834947</size>
</item><item>
<title>In Situ Analysis of a Silver Nanoparticle-Precipitating Shewanella Biofilm by Surface Enhanced Confocal Raman Microscopy</title>
<category>Paper</category>
<infohash>8833ea3c093135c41376aa992a0b17a893c581de</infohash>
<guid>https://academictorrents.com/details/8833ea3c093135c41376aa992a0b17a893c581de</guid>
<link>https://academictorrents.com/details/8833ea3c093135c41376aa992a0b17a893c581de</link>
<description>Shewanella oneidensis MR-1 is an electroactive bacterium, capable of reducing extracellular insoluble electron acceptors, making it important for both nutrient cycling in nature and microbial electrochemical technologies, such as microbial fuel cells and microbial electrosynthesis. When allowed to anaerobically colonize an Ag/AgCl solid interface, S. oneidensis has precipitated silver nanoparticles (AgNp), thus providing the means for a surface enhanced confocal Raman microscopy (SECRaM) investigation of its biofilm. The result is the in situ chemical mapping of the biofilm as it developed over time, where the distribution of cytochromes, reduced and oxidized flavins, polysaccharides and phosphate in the undisturbed biofilm is monitored. Utilizing AgNp bio-produced by the bacteria colonizing the Ag/AgCl interface, we could perform SECRaM while avoiding the use of a patterned or roughened support or the introduction of noble metal salts and reducing agents. This new method will allow a spatially and temporally resolved chemical investigation not only of Shewanella biofilms at an insoluble electron acceptor, but also of other noble metal nanoparticle-precipitating bacteria in laboratory cultures or in complex microbial communities in their natural habitats.</description>
<size>9854252</size>
</item><item>
<title>Reconstruction of extracellular respiratory pathways for iron(III) reduction in Shewanella oneidensis strain MR-1.pdf</title>
<category>Paper</category>
<infohash>c31ee63bff0513661e09d8cb73eec3446e691465</infohash>
<guid>https://academictorrents.com/details/c31ee63bff0513661e09d8cb73eec3446e691465</guid>
<link>https://academictorrents.com/details/c31ee63bff0513661e09d8cb73eec3446e691465</link>
<description>Shewanella oneidensis strain MR-1 is a facultative anaerobic bacterium capable of respiring a multitude of electron acceptors, many of which require the Mtr respiratory pathway. The core Mtr respiratory pathway includes a periplasmic c-type cytochrome (MtrA), an integral outer-membrane β-barrel protein (MtrB), and an outer-membrane-anchored c-type cytochrome (MtrC). Together, these components facilitate transfer of electrons from the c-type cytochrome CymA in the cytoplasmic membrane to electron acceptors at and beyond the outer-membrane. The genes encoding these core proteins have paralogs in the S. oneidensis genome (mtrB and mtrA each have four while mtrC has three) and some of the paralogs of mtrC and mtrA are able to form functional Mtr complexes. We demonstrate that of the additional three mtrB paralogs found in the S. oneidensis genome, only MtrE can replace MtrB to form a functional respiratory pathway to soluble iron(III) citrate. We also evaluate which mtrC/mtrA paralog pairs (a total of 12 combinations) are able to form functional complexes with endogenous levels of mtrB paralog expression. Finally, we reconstruct all possible functional Mtr complexes and test them in a S. oneidensis mutant strain where all paralogs have been eliminated from the genome. We find that each combination tested with the exception of MtrA/MtrE/OmcA is able to reduce iron(III) citrate at a level significantly above background. The results presented here have implications toward the evolution of anaerobic extracellular respiration in Shewanella and for future studies looking to increase the rates of substrate reduction for water treatment, bioremediation, or electricity production.</description>
<size>2238509</size>
</item><item>
<title>ISBI Challenge: Segmentation of neuronal structures in EM stacks</title>
<category>Dataset</category>
<infohash>42714f859770f1a9d8b27985f9f16ea17e8ba2f6</infohash>
<guid>https://academictorrents.com/details/42714f859770f1a9d8b27985f9f16ea17e8ba2f6</guid>
<link>https://academictorrents.com/details/42714f859770f1a9d8b27985f9f16ea17e8ba2f6</link>
<description>|File Name|Description|
|---|---|
|train-volume.tif (7.5 MB)|Original training image, 8-bit grayscale, 512x512x30 pixels|
|train-labels.tif (7.5 MB)|Training image labels (0 - membranes, 255 - non-membranes), 8-bit grayscale, 512x512x30 pixels|
|test-volume.tif (7.5 MB)|Test image, 8-bit grayscale, 512x512x30 pixels|

The training and test datasets are two stacks of 30 sections from a serial section Transmission Electron Microscopy (ssTEM) data set of the Drosophila first instar larva ventral nerve cord (VNC). The microcube measures approx. 2 x 2 x 1.5 microns, with a resolution of 4x4x50 nm/pixel.</description>
<size>23611963</size>
</item><item>
<title>The Visual Genome Dataset v1.0 + v1.2 Images</title>
<category>Dataset</category>
<infohash>1bfe6871046860a2ff8c0cc1414318beb35dc916</infohash>
<guid>https://academictorrents.com/details/1bfe6871046860a2ff8c0cc1414318beb35dc916</guid>
<link>https://academictorrents.com/details/1bfe6871046860a2ff8c0cc1414318beb35dc916</link>
<description>Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li Jia-Li, David Ayman Shamma, Michael Bernstein, Li Fei-Fei</description>
<size>15203364040</size>
</item><item>
<title>The Visual Genome Dataset v1.0 Metadata</title>
<category>Dataset</category>
<infohash>ca98efc75a80278b795ce056fd4229c1bc6f229f</infohash>
<guid>https://academictorrents.com/details/ca98efc75a80278b795ce056fd4229c1bc6f229f</guid>
<link>https://academictorrents.com/details/ca98efc75a80278b795ce056fd4229c1bc6f229f</link>
<description>Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li Jia-Li, David Ayman Shamma, Michael Bernstein, Li Fei-Fei image meta data (16.92 MB) region descriptions (988.18 MB) question answers (201.09 MB) objects (99.14 MB) attributes (174.97 MB) relationships (406.70 MB)</description>
<size>263326070</size>
</item><item>
<title>DC-IGN</title>
<category>Dataset</category>
<infohash>dedcdf859969c600e19577029185895160afbf50</infohash>
<guid>https://academictorrents.com/details/dedcdf859969c600e19577029185895160afbf50</guid>
<link>https://academictorrents.com/details/dedcdf859969c600e19577029185895160afbf50</link>
<description>Direct download links for these files are available at https://github.com/willwhitney/dc-ign</description>
<size>13009461794</size>
</item><item>
<title>QuantQuote Free Historical Stock Data 2013</title>
<category>Dataset</category>
<infohash>49daf05ef35c487331013c22450988bbf7e511b0</infohash>
<guid>https://academictorrents.com/details/49daf05ef35c487331013c22450988bbf7e511b0</guid>
<link>https://academictorrents.com/details/49daf05ef35c487331013c22450988bbf7e511b0</link>
<description>The files are formatted as follows: Date, Time, Open, High, Low, Close, Volume

Date – The date as an integer, where 20100527 represents May 27th, 2010.
Time – The time as an integer, where 1426 represents 2:26PM EST.
Open – The open price.
High – The high price.
Low – The low price.
Close – The close price.
Volume – The trading volume during the interval. Note that it is extremely difficult to get accurate volume information. The volume is adjusted for splits so that the total value of shares traded remains constant even if a split occurs.

https://quantquote.com/docs/QuantQuote_Minute.pdf</description>
<size>36616638</size>
</item><item>
<title>ros_visual_gait_dataset</title>
<category>Dataset</category>
<infohash>ae6b5c0f2b97ec0ac37da11dd6cdea30ddbc4f7e</infohash>
<guid>https://academictorrents.com/details/ae6b5c0f2b97ec0ac37da11dd6cdea30ddbc4f7e</guid>
<link>https://academictorrents.com/details/ae6b5c0f2b97ec0ac37da11dd6cdea30ddbc4f7e</link>
<description>Use case dataset for ros_visual</description>
<size>105031684532</size>
</item><item>
<title>Yelp Restaurant Photo Classification Data</title>
<category>Dataset</category>
<infohash>19c3aa2166d7bfceaf3d76c0d36f812e0f1b87bc</infohash>
<guid>https://academictorrents.com/details/19c3aa2166d7bfceaf3d76c0d36f812e0f1b87bc</guid>
<link>https://academictorrents.com/details/19c3aa2166d7bfceaf3d76c0d36f812e0f1b87bc</link>
<description>At Yelp, there are lots of photos and lots of users uploading photos. These photos provide rich local business information across categories. Teaching a computer to understand the context of these photos is not an easy task. Yelp engineers work on deep learning image classification projects in-house. In this competition, you are given photos that belong to a business and asked to predict the business attributes. There are 9 different attributes in this problem:

0: good_for_lunch
1: good_for_dinner
2: takes_reservations
3: outdoor_seating
4: restaurant_is_expensive
5: has_alcohol
6: has_table_service
7: ambience_is_classy
8: good_for_kids

These labels are annotated by the Yelp community. Your task is to predict these labels purely from the business photos uploaded by users. Since Yelp is a community-driven website, there are duplicated images in the dataset. They are mainly due to users accidentally uploading the same photo to the same business more than once, and to chain businesses uploading the same photo to different branches. Yelp is including these as part of the competition, since these are challenges Yelp researchers face every day.

File descriptions:
train_photos.tgz - photos of the training set
test_photos.tgz - photos of the test set
train_photo_to_biz_ids.csv - maps the photo id to business id
test_photo_to_biz_ids.csv - maps the photo id to business id
train.csv - main training dataset. Includes the business ids and their corresponding labels.</description>
<size>14135506706</size>
</item><item>
<title>CS224d: Deep Learning for Natural Language Processing (Spring 2016)</title>
<category>Course</category>
<infohash>dd9b74b50a1292b4b154094b7338ec1d66e8894d</infohash>
<guid>https://academictorrents.com/details/dd9b74b50a1292b4b154094b7338ec1d66e8894d</guid>
<link>https://academictorrents.com/details/dd9b74b50a1292b4b154094b7338ec1d66e8894d</link>
<description>Natural language processing (NLP) is one of the most important technologies of the information age. Understanding complex language utterances is also a crucial part of artificial intelligence. Applications of NLP are everywhere because people communicate almost everything in language: web search, advertisement, emails, customer service, language translation, radiology reports, etc. There are a large variety of underlying tasks and machine learning models powering NLP applications. Recently, deep learning approaches have obtained very high performance across many different NLP tasks. These models can often be trained with a single end-to-end model and do not require traditional, task-specific feature engineering. In this spring quarter course students will learn to implement, train, debug, visualize and invent their own neural network models. The course provides a deep excursion into cutting-edge research in deep learning applied to NLP. The final project will involve training a complex recurrent neural network and applying it to a large scale NLP problem. On the model side we will cover word vector representations, window-based neural networks, recurrent neural networks, long short-term memory models, recursive neural networks, convolutional neural networks as well as some very novel models involving a memory component. Through lectures and programming assignments students will learn the necessary engineering tricks for making neural networks work on practical problems.</description>
<size>3829345114</size>
</item><item>
<title>Cern Movie</title>
<category>Dataset</category>
<infohash>a1e5fcc47c73a5e8f86847cda592a248aa9961df</infohash>
<guid>https://academictorrents.com/details/a1e5fcc47c73a5e8f86847cda592a248aa9961df</guid>
<link>https://academictorrents.com/details/a1e5fcc47c73a5e8f86847cda592a248aa9961df</link>
<description>Cern Movie http://i.imgur.com/f1WKPuo.png PS facility</description>
<size>8684450</size>
</item><item>
<title>Force-Directed Drawing Algorithms</title>
<category>Paper</category>
<infohash>aefbc96b80ca671e69890a8d68b2a7ab0c165224</infohash>
<guid>https://academictorrents.com/details/aefbc96b80ca671e69890a8d68b2a7ab0c165224</guid>
<link>https://academictorrents.com/details/aefbc96b80ca671e69890a8d68b2a7ab0c165224</link>
<description/>
<size>1475361</size>
</item><item>
<title>A new data set of educational attainment in the world, 1950–2010 </title>
<category>Paper</category>
<infohash>dc52afe0afe0be281584009a022a6bbfb9edfc9f</infohash>
<guid>https://academictorrents.com/details/dc52afe0afe0be281584009a022a6bbfb9edfc9f</guid>
<link>https://academictorrents.com/details/dc52afe0afe0be281584009a022a6bbfb9edfc9f</link>
<description>"Our panel data set on educational attainment has been updated for 146 countries from 1950 to 2010. The data are disaggregated by sex and by 5-year age intervals. We have improved the accuracy of estimation by using information from consistent census data, disaggregated by age group, along with new estimates of mortality rates and completion rates by age and education level. We compare the estimates with our previous ones (Barro and Lee, 2001) and alternative measures (Cohen and Soto, 2007). Our estimates of educational attainment provide a reasonable proxy for the stock of human capital for a broad group of countries and should be useful for a variety of empirical work. "</description>
<size>950082</size>
</item><item>
<title>Reproducibility in density functional theory calculations of solids</title>
<category>Paper</category>
<infohash>0097a6c5760973c4eeabff0972d85b94583cf061</infohash>
<guid>https://academictorrents.com/details/0097a6c5760973c4eeabff0972d85b94583cf061</guid>
<link>https://academictorrents.com/details/0097a6c5760973c4eeabff0972d85b94583cf061</link>
<description>Density functional theory (DFT) is now routinely used for simulating material properties. Many software packages are available, which makes it challenging to know which are the best to use for a specific calculation. Lejaeghere et al. compared the calculated values for the equations of state for 71 elemental crystals from 15 different widely used DFT codes employing 40 different potentials (see the Perspective by Skylaris). Although there were variations in the calculated values, most recent codes and methods converged toward a single value, with errors comparable to those of experiment. Science, this issue p. 10.1126/science.aad3000; see also p. 1394.

INTRODUCTION: The reproducibility of results is one of the underlying principles of science. An observation can only be accepted by the scientific community when it can be confirmed by independent studies. However, reproducibility does not come easily. Recent works have painfully exposed cases where previous conclusions were not upheld. The scrutiny of the scientific community has also turned to research involving computer programs, finding that reproducibility depends more strongly on implementation than commonly thought. These problems are especially relevant for property predictions of crystals and molecules, which hinge on precise computer implementations of the governing equations of quantum physics.

RATIONALE: This work focuses on density functional theory (DFT), a particularly popular quantum method for both academic and industrial applications. More than 15,000 DFT papers are published each year, and DFT is now increasingly used in an automated fashion to build large databases or apply multiscale techniques with limited human supervision. Therefore, the reproducibility of DFT results underlies the scientific credibility of a substantial fraction of current work in the natural and engineering sciences. A plethora of DFT computer codes are available, many of them differing considerably in their details of implementation, and each yielding a certain "precision" relative to other codes. How is one to decide for more than a few simple cases which code predicts the correct result, and which does not? We devised a procedure to assess the precision of DFT methods and used this to demonstrate reproducibility among many of the most widely used DFT codes. The essential part of this assessment is a pairwise comparison of a wide range of methods with respect to their predictions of the equations of state of the elemental crystals. This effort required the combined expertise of a large group of code developers and expert users.

RESULTS: We calculated equation-of-state data for four classes of DFT implementations, totaling 40 methods. Most codes agree very well, with pairwise differences that are comparable to those between different high-precision experiments. Even in the case of pseudization approaches, which largely depend on the atomic potentials used, a similar precision can be obtained as when using the full potential. The remaining deviations are due to subtle effects, such as specific numerical implementations or the treatment of relativistic terms.

CONCLUSION: Our work demonstrates that the precision of DFT implementations can be determined, even in the absence of one absolute reference code. Although this was not the case 5 to 10 years ago, most of the commonly used codes and methods are now found to predict essentially identical results. The established precision of DFT codes not only ensures the reproducibility of DFT predictions but also puts several past and future developments on a firmer footing. Any newly developed methodology can now be tested against the benchmark to verify whether it reaches the same level of precision. New DFT applications can be shown to have used a sufficiently precise method. Moreover, high-precision DFT calculations are essential for developing improvements to DFT methodology, such as new density functionals, which may further increase the predictive power of the simulations.

Figure caption: Recent DFT methods yield reproducible results. Whereas older DFT implementations predict different values (red darts), codes have now evolved to mutual agreement (green darts). The scoreboard illustrates the good pairwise agreement of four classes of DFT implementations (horizontal direction) with all-electron results (vertical direction). Each number reflects the average difference between the equations of state for a given pair of methods, with the green-to-red color scheme showing the range from the best to the poorest agreement.

ABSTRACT: The widespread popularity of density functional theory has given rise to an extensive range of dedicated codes for predicting molecular and crystalline properties. However, each code implements the formalism in a different way, raising questions about the reproducibility of such predictions. We report the results of a community-wide effort that compared 15 solid-state codes, using 40 different potentials or basis set types, to assess the quality of the Perdew-Burke-Ernzerhof equations of state for 71 elemental crystals. We conclude that predictions from recent codes and pseudopotentials agree very well, with pairwise differences that are comparable to those between different high-precision experiments. Older methods, however, have less precise agreement. Our benchmark provides a framework for users and developers to document the precision of new applications and methodological improvements.</description>
<size>1210157</size>
</item><item>
<title>A New Algorithm for Processing Interferometric Data-Stacks: SqueeSAR</title>
<category>Paper</category>
<infohash>ff5db44de9703c5ab954e18944867da1fb0b0604</infohash>
<guid>https://academictorrents.com/details/ff5db44de9703c5ab954e18944867da1fb0b0604</guid>
<link>https://academictorrents.com/details/ff5db44de9703c5ab954e18944867da1fb0b0604</link>
<description>Permanent Scatterer SAR Interferometry (PSInSAR) aims to identify coherent radar targets exhibiting high phase stability over the entire observation period. These targets often correspond to point-wise, man-made objects, which are abundant in cities but less common in non-urban areas. To overcome the limits of PSInSAR, the analysis of interferometric data-stacks should aim at extracting geophysical parameters not only from point-wise deterministic objects (i.e., PS) but also from distributed scatterers (DS). Rather than developing hybrid processing chains in which two or more algorithms are applied to the same data-stack and the results are then combined, in this paper we introduce a new approach, SqueeSAR, to jointly process PS and DS, taking into account their different statistical behavior. As will be shown, PS and DS can be jointly processed without significant changes to the traditional PSInSAR processing chain and without the need to unwrap hundreds of interferograms, provided that the coherence matrix associated with each DS is properly “squeezed” to provide a vector of optimum (wrapped) phase values. Results on real SAR data acquired over an Alpine area, challenging for any InSAR analysis, confirm the effectiveness of the new approach.</description>
<size>1710318</size>
</item><item>
<title>Improvements to a MODIS global terrestrial evapotranspiration algorithm</title>
<category>Paper</category>
<infohash>1a5b1cd72229f9c5f8d81b50b1c0fc79768008b0</infohash>
<guid>https://academictorrents.com/details/1a5b1cd72229f9c5f8d81b50b1c0fc79768008b0</guid>
<link>https://academictorrents.com/details/1a5b1cd72229f9c5f8d81b50b1c0fc79768008b0</link>
<description>MODIS global evapotranspiration (ET) products by Mu et al. [Mu, Q., Heinsch, F. A., Zhao, M., Running, S. W. (2007). Development of a global evapotranspiration algorithm based on MODIS and global meteorology data. Remote Sensing of Environment, 111, 519–536. doi:10.1016/j.rse.2007.04.015] are the first regular 1-km² land surface ET dataset for the 109.03 million km² of global vegetated land area at an 8-day interval. In this study, we have further improved the ET algorithm of Mu et al. (2007a, hereafter called the old algorithm) by 1) simplifying the calculation of vegetation cover fraction; 2) calculating ET as the sum of daytime and nighttime components; 3) adding a soil heat flux calculation; 4) improving estimates of stomatal conductance, aerodynamic resistance, and boundary layer resistance; 5) separating the dry canopy surface from the wet; and 6) dividing the soil surface into saturated wet surface and moist surface. We compared the improved algorithm with the old one both globally and locally at 46 eddy flux towers. The global annual total ET over the vegetated land surface is 62.8 × 10³ km³, which agrees very well with other reported estimates of 65.5 × 10³ km³ over the terrestrial land surface and is much higher than the 45.8 × 10³ km³ estimated with the old algorithm. For ET evaluation at eddy flux towers, the improved algorithm reduces the mean absolute error (MAE) of daily ET from 0.39 mm day⁻¹ to 0.33 mm day⁻¹ when driven by tower meteorological data, and from 0.40 mm day⁻¹ to 0.31 mm day⁻¹ when driven by GMAO data, a global meteorological reanalysis dataset. MAE values for the improved ET algorithm are 24.6% and 24.1% of the ET measured at towers, within the range (10–30%) of reported uncertainties in ET measurements, implying an enhanced accuracy of the improved algorithm.
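The tower-based evaluation metric can be sketched in a few lines (the daily ET values below are hypothetical illustrations, not the study's data); MAE here is the plain mean absolute error between modeled and tower-measured daily ET:

```python
import numpy as np

def mae(predicted, observed):
    """Mean absolute error between predicted and observed daily ET (mm/day)."""
    return np.mean(np.abs(np.asarray(predicted) - np.asarray(observed)))

# hypothetical daily ET values (mm/day) at one flux tower
tower = [2.1, 3.4, 1.8, 2.9]
model = [2.4, 3.0, 2.1, 2.6]
print(mae(model, tower))  # approx 0.325 mm/day
```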
Compared to the old algorithm, the improved algorithm increases the skill score with tower-driven ET estimates from 0.50 to 0.55, and from 0.46 to 0.53 with GMAO-driven ET. Based on these results, the improved ET algorithm performs better in generating global ET data products, providing critical information on global terrestrial water and energy cycles and environmental change.</description>
<size>3006587</size>
</item><item>
<title>Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery </title>
<category>Paper</category>
<infohash>08df98f50351881f44adf8e0f75cd36107fa71fc</infohash>
<guid>https://academictorrents.com/details/08df98f50351881f44adf8e0f75cd36107fa71fc</guid>
<link>https://academictorrents.com/details/08df98f50351881f44adf8e0f75cd36107fa71fc</link>
<description>Researchers using traditional digital classification algorithms typically encounter serious difficulties in identifying urban land cover classes from high-resolution data. The usual approach relies on spectral information alone, ignoring spatial information and the fact that groups of pixels may need to be considered together as an object. We used QuickBird image data over a central region of the city of Phoenix, Arizona, to examine whether an object-based classifier can accurately identify urban classes. To test whether spectral information alone is practical for urban classification, we examined whether spectra of the selected classes, taken from randomly selected points, could be effectively discriminated. The overall accuracy based on spectral information alone reached only about 63.33%. We employed five different classification procedures within the object-based paradigm, which separates spatially and spectrally similar pixels at different scales. The classifiers used to assign land covers to segmented objects included membership functions and the nearest neighbor classifier. The object-based classifier achieved a high overall accuracy (90.40%), whereas the most commonly used decision rule, the maximum likelihood classifier, produced a lower overall accuracy (67.60%). This study demonstrates that the object-based classifier is a significantly better approach than classical per-pixel classifiers. Further, the study reviews the application of different parameters for segmentation and classification, the combined use of composite and original bands, the selection of different scale levels, and the choice of classifiers. Strengths and weaknesses of the object-based prototype are presented, and we provide suggestions to avoid or minimize the uncertainties and limitations associated with the approach.</description>
<size>3465408</size>
</item><item>
<title>Detecting trend and seasonal changes in satellite image time series </title>
<category>Paper</category>
<infohash>d81c7d0105017a1001dd741d0f02eec7006f9edb</infohash>
<guid>https://academictorrents.com/details/d81c7d0105017a1001dd741d0f02eec7006f9edb</guid>
<link>https://academictorrents.com/details/d81c7d0105017a1001dd741d0f02eec7006f9edb</link>
<description>A wealth of remotely sensed image time series covering large areas is now available to the earth science community. Change detection methods are often not capable of detecting land cover changes within time series that are heavily influenced by seasonal climatic variations. Detecting change within the trend and seasonal components of a time series enables the classification of different types of change. Changes occurring in the trend component often indicate disturbances (e.g., fires, insect attacks), while changes occurring in the seasonal component indicate phenological changes (e.g., a change in land cover type). A generic change detection approach is proposed for time series by detecting and characterizing Breaks For Additive Seasonal and Trend (BFAST). BFAST integrates the decomposition of time series into trend, seasonal, and remainder components with methods for detecting change within time series. BFAST iteratively estimates the time and number of changes, and characterizes each change by its magnitude and direction. We tested BFAST by simulating 16-day Normalized Difference Vegetation Index (NDVI) time series with varying amounts of seasonality and noise, and by adding abrupt changes at different times and magnitudes. This revealed that BFAST can robustly detect change of different magnitudes (&gt; 0.1 NDVI) within time series with different noise levels (0.01–0.07 σ) and seasonal amplitudes (0.1–0.5 NDVI). Additionally, BFAST was applied to 16-day NDVI Moderate Resolution Imaging Spectroradiometer (MODIS) composites for a forested study area in south-eastern Australia. This showed that BFAST is able to detect and characterize spatial and temporal changes in a forested landscape. BFAST is not specific to a particular data type and can be applied to time series without the need to normalize for land cover types, select a reference period, or specify a change trajectory.
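The trend-plus-seasonal decomposition at the core of this kind of analysis can be sketched as follows (an assumed simplification using a single first-order harmonic fitted by least squares, not the authors' BFAST implementation; the simulated series is illustrative):

```python
import numpy as np

def trend_season_fit(t, y, period=23.0):
    """Least-squares fit of y(t) = intercept + slope*t plus a first-order
    harmonic with the given period (23 samples per year for a 16-day
    composite series). Returns the coefficients and the remainder."""
    w = 2.0 * np.pi / period
    x = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(x, y, rcond=None)
    return coef, y - x @ coef

t = np.arange(92.0)  # four years of 16-day steps
y = 0.4 + 0.002 * t + 0.2 * np.sin(2.0 * np.pi * t / 23.0)  # simulated NDVI
coef, remainder = trend_season_fit(t, y)
print(coef)  # approx [0.4, 0.002, 0.2, 0.0]
```

A break-detection scheme in this spirit would fit such a model piecewise and flag times where the trend or seasonal coefficients change significantly; the remainder carries the noise against which change magnitudes are judged.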
The method can be integrated into monitoring frameworks and used as an alarm system to flag when and where changes occur.</description>
<size>1480729</size>
</item><item>
<title>An automated approach for reconstructing recent forest disturbance history using dense Landsat time series stacks </title>
<category>Paper</category>
<infohash>8e31455f8f43c6c534fb9529cb26a8afb2792b48</infohash>
<guid>https://academictorrents.com/details/8e31455f8f43c6c534fb9529cb26a8afb2792b48</guid>
<link>https://academictorrents.com/details/8e31455f8f43c6c534fb9529cb26a8afb2792b48</link>
<description>A highly automated algorithm called the vegetation change tracker (VCT) has been developed for reconstructing recent forest disturbance history using Landsat time series stacks (LTSS). The algorithm is based on the spectral–temporal properties of land cover and forest change processes, and requires little or no fine tuning for most forests with closed or nearly closed canopy cover. It was found to be very efficient, taking 2–3 h on average to analyze an LTSS consisting of 12 or more Landsat images on an average desktop PC. This LTSS-VCT approach has been used to examine disturbance patterns at a biennial temporal interval from 1984 to 2006 for many locations across the conterminous U.S. Accuracy assessment over 6 validation sites revealed that overall accuracies of around 80% were achieved for disturbances mapped at the individual-year level. Average user's and producer's accuracies of the disturbance classes were around 70% and 60%, respectively, in 5 of the 6 sites, suggesting that although forest disturbances were typically rare compared with no-change classes, on average the VCT detected more than half of those disturbances with relatively low levels of false alarms. Field assessment revealed that VCT was able to detect most stand-clearing disturbance events, including harvest, fire, and urban development, while some non-stand-clearing events such as thinning and selective logging were also mapped in the western U.S. The applicability of the LTSS-VCT approach depends on the availability of a temporally adequate supply of Landsat imagery. To ensure that forest disturbance records can be developed continuously in the future, it is necessary to plan and develop observational capabilities today that will allow continuous acquisition of frequent Landsat or Landsat-like observations.</description>
<size>1371482</size>
</item><item>
<title>Remote sensing of the urban heat island effect across biomes in the continental USA</title>
<category>Paper</category>
<infohash>a80245643c9369585155362700433bd498a10728</infohash>
<guid>https://academictorrents.com/details/a80245643c9369585155362700433bd498a10728</guid>
<link>https://academictorrents.com/details/a80245643c9369585155362700433bd498a10728</link>
<description>Impervious surface area (ISA) from the Landsat TM-based NLCD 2001 dataset and land surface temperature (LST) from MODIS, averaged over three annual cycles (2003–2005), are used in a spatial analysis to assess the urban heat island (UHI) skin-temperature amplitude and its relationship to development intensity, size, and ecological setting for 38 of the most populous cities in the continental United States. Development intensity zones based on %ISA are defined for each urban area, emanating outward from the urban core to the nearby non-urban rural areas, and are used to stratify sampling of land surface temperatures and NDVI. Sampling is further constrained by biome and elevation to ensure objective intercomparisons between zones and between cities in different biomes, permitting the definition of hierarchically ordered zones that are consistent across urban areas in different ecological settings and across scales. We find that ecological context significantly influences the amplitude of the summer daytime UHI (urban–rural temperature difference), with the largest amplitude (8 °C on average) observed for cities built in biomes dominated by temperate broadleaf and mixed forest. For all cities combined, ISA is the primary driver of the temperature increase, explaining 70% of the total variance in LST. On a yearly average, urban areas are substantially warmer than the non-urban fringe, by 2.9 °C, except for urban areas in biomes with arid and semiarid climates. The average amplitude of the UHI is remarkably asymmetric, with a 4.3 °C temperature difference in summer and only 1.3 °C in winter. In desert environments, the response of LST to ISA presents an uncharacteristic “U-shaped” horizontal gradient, decreasing from the urban core to the outskirts of the city and then increasing again from the suburban to the rural zones. UHIs calculated for these cities point to a possible heat sink effect.
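The zone-based amplitude calculation can be sketched as follows (the pixel values and %ISA thresholds below are illustrative assumptions, not the study's zone definitions): the UHI amplitude is the mean LST of the urban-core zone minus that of the rural reference zone, with pixels stratified by %ISA.

```python
import numpy as np

def uhi_amplitude(lst, isa, core_min=75.0, rural_max=25.0):
    """Urban heat island amplitude: mean land surface temperature of the
    urban-core zone minus that of the rural reference zone, with zones
    defined by percent impervious surface area (ISA)."""
    core = np.greater_equal(isa, core_min)   # high-ISA urban-core pixels
    rural = np.less(isa, rural_max)          # low-ISA rural-fringe pixels
    return lst[core].mean() - lst[rural].mean()

# hypothetical summer-day LST (deg C) and %ISA samples for one city
lst = np.array([33.0, 34.5, 35.0, 28.0, 27.5, 26.5])
isa = np.array([80.0, 90.0, 85.0, 10.0, 5.0, 15.0])
print(uhi_amplitude(lst, isa))  # approx 6.83 deg C
```

Repeating this per season gives the summer/winter asymmetry reported here, and binning by intermediate %ISA ranges reproduces the zone-by-zone gradient.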
These observational results show that the urban heat island amplitude both increases with city size and is seasonally asymmetric for a large number of cities across most biomes. The implication is that for urban areas developed within forested ecosystems, the summertime UHI can be quite high relative to the wintertime UHI, suggesting that the residential energy consumption required for summer cooling is likely to increase with urban growth within those biomes.</description>
<size>1704736</size>
</item><item>
<title>Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr — Temporal segmentation algorithms </title>
<category>Paper</category>
<infohash>df48e8493c5fe72e25e7bc7edbdd96634ec3a67e</infohash>
<guid>https://academictorrents.com/details/df48e8493c5fe72e25e7bc7edbdd96634ec3a67e</guid>
<link>https://academictorrents.com/details/df48e8493c5fe72e25e7bc7edbdd96634ec3a67e</link>
<description>We introduce and test LandTrendr (Landsat-based detection of Trends in Disturbance and Recovery), a new approach to extract spectral trajectories of land surface change from yearly Landsat time-series stacks (LTS). The method brings together two themes in time-series analysis of LTS: capture of short-duration events and smoothing of long-term trends. Our strategy is founded on the recognition that change is not simply a contrast between conditions at two points in time, but rather a continual process operating at both fast and slow rates on landscapes. This concept requires both new algorithms to extract change and new interpretation tools to validate those algorithms. The challenge is to resolve salient features of the time series while eliminating noise introduced by ephemeral changes in illumination, phenology, atmospheric condition, and geometric registration. In the LandTrendr approach, we use relative radiometric normalization and simple cloud screening rules to create on-the-fly mosaics of multiple images per year, and extract temporal trajectories of spectral data on a pixel-by-pixel basis. We then apply temporal segmentation strategies with both regression-based and point-to-point fitting of spectral indices as a function of time, allowing capture of both slowly evolving processes, such as regrowth, and abrupt events, such as forest harvest. Because any temporal trajectory pattern is allowable, we use control parameters and threshold-based filtering to reduce the role of false-positive detections. No suitable reference data are available to assess the role of these control parameters or to test overall algorithm performance.
Therefore, we also developed a companion interpretation approach founded on the same conceptual framework of capturing both long- and short-duration processes, and developed a software tool to apply this concept to expert interpretation and segmentation of spectral trajectories (TimeSync, described in a companion paper by Cohen et al., 2010). These data were used as a truth set against which to evaluate the behavior of the LandTrendr algorithms applied to three spectral indices. We applied the LandTrendr algorithms to several hundred points across western Oregon and Washington (U.S.A.). Because of the diversity of potential outputs from the LTS data, we evaluated algorithm performance against summary metrics for disturbance, recovery, and stability, both for the capture of events and for longer-duration processes. Despite the apparent complexity of the parameters, our results suggest a simple grouping of parameters along a single axis that balances the detection of abrupt events with the capture of long-duration trends. Overall algorithm performance was good, capturing a wide range of disturbance and recovery phenomena, even when evaluated against a truth set that contained new targets (recovery and stability) with much subtler thresholds of change than available from prior validation datasets. Temporal segmentation of the archive appears to be a feasible and robust means of increasing information extraction from the Landsat archive.</description>
<size>2657595</size>
</item><item>
<title>Remote sensing of impervious surfaces in the urban areas: Requirements, methods, and trends </title>
<category>Paper</category>
<infohash>62250f5805b368e2b796157af5db13825b25be7a</infohash>
<guid>https://academictorrents.com/details/62250f5805b368e2b796157af5db13825b25be7a</guid>
<link>https://academictorrents.com/details/62250f5805b368e2b796157af5db13825b25be7a</link>
<description>Knowledge of impervious surfaces (especially their magnitude, location, geometry, and spatial pattern, and the perviousness–imperviousness ratio) is significant to a range of issues and themes in environmental science central to global environmental change and human–environment interactions. Impervious surface data are important for urban planning and for environmental and resource management. Remote sensing of impervious surfaces in urban areas has therefore recently attracted unprecedented attention. In this paper, various digital remote sensing approaches to extracting and estimating impervious surfaces are examined. The discussion focuses on the mapping requirements of urban impervious surfaces. In particular, the impacts of spatial, geometric, spectral, and temporal resolutions on estimation and mapping are addressed, as is the selection of an appropriate estimation method based on the characteristics of the remotely sensed data. This literature review suggests that the major approaches of the past decade include pixel-based methods (image classification, regression, etc.), sub-pixel-based methods (linear spectral unmixing, imperviousness as the complement of vegetation fraction, etc.), object-oriented algorithms, and artificial neural networks. Techniques such as data/image fusion, expert systems, and contextual classification methods have also been explored. The majority of research effort has been devoted to mapping urban landscapes at various scales and to the spatial resolution requirements of such mapping. In contrast, there has been less interest in the spectral and geometric properties of impervious surfaces. More research is also needed to better understand temporal resolution, the change and evolution of impervious surfaces over time, and the temporal requirements of urban mapping.
The models, methods, and image analysis algorithms in urban remote sensing have largely been developed for imagery of medium resolution (10–100 m). The advent of high spatial resolution satellite images, spaceborne hyperspectral images, and LiDAR data is stimulating new research ideas and driving future research trends with new models and algorithms.</description>
<size>1389740</size>
</item><item>
<title>Landsat-based inventory of glaciers in western Canada, 1985–2005 </title>
<category>Paper</category>
<infohash>38c86f2ab06ee403ea44f3b1ec2e5cf0c986d6e9</infohash>
<guid>https://academictorrents.com/details/38c86f2ab06ee403ea44f3b1ec2e5cf0c986d6e9</guid>
<link>https://academictorrents.com/details/38c86f2ab06ee403ea44f3b1ec2e5cf0c986d6e9</link>
<description>We report on a glacier inventory for the Canadian Cordillera south of 60°N, across the two western provinces of British Columbia and Alberta, containing ~30,000 km² of glacierized terrain. Our semi-automated method extracted glacier extents from Landsat Thematic Mapper (TM) scenes for 2005 and 2000 using a band ratio (TM3/TM5). We compared these extents with glacier cover for the mid-1980s, derived from high-altitude aerial photography for British Columbia and from Landsat TM imagery for Alberta. A 25 m digital elevation model (DEM) helped to identify debris-covered ice and to split the glaciers into their respective drainage basins. The estimated mapping errors are 3–4% and arise primarily from seasonal snow cover. Glaciers in British Columbia and Alberta lost 10.8 ± 3.8% and 25.4 ± 4.1% of their area, respectively, over the period 1985–2005. The region-wide annual shrinkage rate of 0.55% a⁻¹ is comparable to rates reported for other mountain ranges in the late twentieth century. The least glacierized mountain ranges, with smaller glaciers, lost the largest fraction of ice cover: the highest relative ice loss in British Columbia (24.0 ± 4.6%) occurred in the northern Interior Ranges, while glaciers in the northern Coast Mountains declined least (7.7 ± 3.4%).</description>
<size>1441008</size>
</item><item>
<title>MODIS Collection 5 global land cover: Algorithm refinements and characterization of new datasets</title>
<category>Paper</category>
<infohash>651a3368f4826b92876833ae3e81422402a0c38c</infohash>
<guid>https://academictorrents.com/details/651a3368f4826b92876833ae3e81422402a0c38c</guid>
<link>https://academictorrents.com/details/651a3368f4826b92876833ae3e81422402a0c38c</link>
<description>Information related to land cover is immensely important to global change science. In the past decade, data sources and methodologies for creating global land cover maps from remote sensing have evolved rapidly. Here we describe the datasets and algorithms used to create the Collection 5 MODIS Global Land Cover Type product, which is substantially changed relative to Collection 4. In addition to using updated input data, the algorithm and ancillary datasets used to produce the product have been refined. Most importantly, the Collection 5 product is generated at 500-m spatial resolution, a four-fold increase in spatial resolution relative to the previous version. In addition, many components of the classification algorithm have been changed. The training site database has been revised, land surface temperature is now included as an input feature, and the ancillary datasets used in post-processing of the ensemble decision tree results have been updated. Further, the methods used to correct classifier results for bias imposed by training data properties have been refined, the techniques used to fuse ancillary data based on spatially varying prior probabilities have been revised, and a variety of methods have been developed to address limitations of the algorithm for the urban, wetland, and deciduous needleleaf classes. Finally, techniques used to stabilize classification results across years have been developed and implemented to reduce year-to-year variation in land cover labels not associated with land cover change. Results from a cross-validation analysis indicate that the overall accuracy of the product is about 75% correctly classified, but the range in class-specific accuracies is large. Comparison of Collection 5 maps with Collection 4 results shows substantial differences arising from the increased spatial resolution and from changes in the input data and classification algorithm.</description>
<size>1635188</size>
</item><item>
<title>Point Set Registration: Coherent Point Drift</title>
<category>Paper</category>
<infohash>e280d7e19118f144162e7be7dba37f70c12c26ed</infohash>
<guid>https://academictorrents.com/details/e280d7e19118f144162e7be7dba37f70c12c26ed</guid>
<link>https://academictorrents.com/details/e280d7e19118f144162e7be7dba37f70c12c26ed</link>
<description>Point set registration is a key component of many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, the large dimensionality of the point sets, noise, and outliers, make point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterizing the GMM centroid locations with rigid parameters and derive a closed-form solution for the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and use variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method's computational complexity to linear. We test the CPD algorithm for both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.</description>
<size>6608516</size>
</item><item>
<title>Survey of Pedestrian Detection for Advanced Driver Assistance Systems</title>
<category>Paper</category>
<infohash>e0a66424039e93e898f2daefb030a0ad33645ebd</infohash>
<guid>https://academictorrents.com/details/e0a66424039e93e898f2daefb030a0ad33645ebd</guid>
<link>https://academictorrents.com/details/e0a66424039e93e898f2daefb030a0ad33645ebd</link>
<description>Advanced driver assistance systems (ADASs), and particularly pedestrian protection systems (PPSs), have become an active research area aimed at improving traffic safety. The major challenge for PPSs is the development of reliable on-board pedestrian detection systems. Owing to the varying appearance of pedestrians (e.g., different clothes, changing size, aspect ratio, and dynamic shape) and the unstructured environment, it is very difficult to achieve the robustness demanded of this kind of system. Two problems arising in this research area are the lack of public benchmarks and the difficulty of reproducing many of the proposed methods, which makes it hard to compare the approaches. As a result, surveying the literature by enumerating the proposals one after another is not the most useful way to provide a comparative point of view. Accordingly, we present a more convenient strategy for surveying the different approaches. We divide the problem of detecting pedestrians from images into different processing steps, each with attached responsibilities. The different proposed methods are then analyzed and classified with respect to each processing stage, favoring a comparative viewpoint. Finally, a discussion of the important topics is presented, with special emphasis on future needs and challenges.</description>
<size>4587734</size>
</item><item>
<title>Fast Keypoint Recognition Using Random Ferns</title>
<category>Paper</category>
<infohash>f0ca867da5c453100f005c362e3d6009edcfc9e6</infohash>
<guid>https://academictorrents.com/details/f0ca867da5c453100f005c362e3d6009edcfc9e6</guid>
<link>https://academictorrents.com/details/f0ca867da5c453100f005c362e3d6009edcfc9e6</link>
<description>While feature point recognition is a key component of modern approaches to object detection, existing approaches require computationally expensive patch preprocessing to handle perspective distortion. In this paper, we show that formulating the problem in a naive Bayesian classification framework makes such preprocessing unnecessary and produces an algorithm that is simple, efficient, and robust. Furthermore, it scales well as the number of classes grows. To recognize the patches surrounding keypoints, our classifier uses hundreds of simple binary features and models class posterior probabilities. We make the problem computationally tractable by assuming independence between arbitrary sets of features. Even though this is not strictly true, we demonstrate that our classifier nevertheless performs remarkably well on image data sets containing very significant perspective changes.</description>
<size>4263729</size>
</item><item>
<title>Efficient Additive Kernels via Explicit Feature Maps</title>
<category>Paper</category>
<infohash>282fa545c9051a074b6ed777e79bc1c2423b7c23</infohash>
<guid>https://academictorrents.com/details/282fa545c9051a074b6ed777e79bc1c2423b7c23</guid>
<link>https://academictorrents.com/details/282fa545c9051a074b6ed777e79bc1c2423b7c23</link>
<description>Large scale nonlinear support vector machines (SVMs) can be approximated by linear ones using a suitable feature map. Linear SVMs are in general much faster to learn and evaluate (test) than the original nonlinear SVMs. This work introduces explicit feature maps for the additive class of kernels, such as the intersection, Hellinger's, and χ² kernels, commonly used in computer vision, and enables their use in large scale problems. In particular, we: 1) provide explicit feature maps for all additive homogeneous kernels, along with closed-form expressions for all common kernels; 2) derive corresponding approximate finite-dimensional feature maps based on a spectral analysis; and 3) quantify the error of the approximation, showing that the error is independent of the data dimension and decays exponentially fast with the approximation order for selected kernels such as χ². We demonstrate that the approximations have indistinguishable performance from the full kernels yet greatly reduce the train/test times of SVMs. We also compare with two other approximation methods: the Nyström approximation of Perronnin et al. [1], which is data dependent, and the explicit map of Maji and Berg [2] for the intersection kernel, which, as with our approximations, is data independent. The approximations are evaluated on a number of standard datasets, including Caltech-101 [3], Daimler-Chrysler pedestrians [4], and INRIA pedestrians [5].</description>
<size>1444352</size>
</item><item>
<title>In the Eye of the Beholder: A Survey of Models for Eyes and Gaze</title>
<category>Paper</category>
<infohash>e5a2f435414faebcf45965689cc0e10c6c39704c</infohash>
<guid>https://academictorrents.com/details/e5a2f435414faebcf45965689cc0e10c6c39704c</guid>
<link>https://academictorrents.com/details/e5a2f435414faebcf45965689cc0e10c6c39704c</link>
<description>Despite active research and significant progress over the last 30 years, eye detection and tracking remain challenging due to the individuality of eyes, occlusion, and variability in scale, location, and lighting conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and the state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical development, and is consequently of interest to many other problem domains in computer vision and beyond.</description>
<size>1524162</size>
</item><item>
<title>Product Quantization for Nearest Neighbor Search</title>
<category>Paper</category>
<infohash>8f98718faf0b777f8bcf4224b7a965a67c2cd915</infohash>
<guid>https://academictorrents.com/details/8f98718faf0b777f8bcf4224b7a965a67c2cd915</guid>
<link>https://academictorrents.com/details/8f98718faf0b777f8bcf4224b7a965a67c2cd915</link>
<description>This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The Euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.</size></description>
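The decomposition, encoding, and asymmetric distance estimation summarized above can be illustrated with a toy sketch. Note the hedge: a real system learns the per-subspace codebooks with k-means; the tiny hand-written centroids and 4-D vectors here are purely illustrative.

```python
# Toy product quantization: split 4-D vectors into 2 subspaces of 2 dims,
# quantize each subspace against a tiny hand-made codebook (illustrative
# centroids; real codebooks would be learned with k-means).
codebooks = [
    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)],  # subspace 0
    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)],  # subspace 1
]

def split(v):
    return [v[0:2], v[2:4]]

def encode(v):
    """Represent v by the index of the nearest centroid in each subspace."""
    code = []
    for sub, book in zip(split(v), codebooks):
        dists = [sum((a - b) ** 2 for a, b in zip(sub, c)) for c in book]
        code.append(dists.index(min(dists)))
    return tuple(code)

def asymmetric_distance(query, code):
    """Estimated squared distance between an *unquantized* query and an
    encoded vector: sum of query-to-centroid distances per subspace."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(sub, codebooks[m][idx]))
        for m, (sub, idx) in enumerate(zip(split(query), code))
    )
```

In practice the per-subspace query-to-centroid distances are precomputed once into lookup tables, so scanning millions of codes costs only table lookups and additions.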
<size>1950353</size>
</item><item>
<title>DAISY: An Efficient Dense Descriptor Applied to Wide-Baseline Stereo</title>
<category>Paper</category>
<infohash>2c89ab9d7211db9f18790ad2a001902edeaf6c16</infohash>
<guid>https://academictorrents.com/details/2c89ab9d7211db9f18790ad2a001902edeaf6c16</guid>
<link>https://academictorrents.com/details/2c89ab9d7211db9f18790ad2a001902edeaf6c16</link>
<description>In this paper, we introduce a local image descriptor, DAISY, which is very efficient to compute densely. We also present an EM-based algorithm to compute dense depth and occlusion maps from wide-baseline image pairs using this descriptor. This yields much better results in wide-baseline situations than the pixel and correlation-based algorithms that are commonly used in narrow-baseline stereo. Also, using a descriptor makes our algorithm robust against many photometric and geometric transformations. Our descriptor is inspired by earlier ones such as SIFT and GLOH but can be computed much faster for our purposes. Unlike SURF, which can also be computed efficiently at every pixel, it does not introduce artifacts that degrade the matching performance when used densely. It is important to note that our approach is the first algorithm that attempts to estimate dense depth maps from wide-baseline image pairs, and we show that it performs well through many experiments on depth estimation accuracy, occlusion detection, and comparisons against other descriptors on laser-scanned ground-truth scenes. We also tested our approach on a variety of indoor and outdoor scenes with different photometric and geometric transformations, and our experiments support our claim of robustness against these transformations.</description>
<size>6721035</size>
</item><item>
<title>Visual Word Ambiguity</title>
<category>Paper</category>
<infohash>19ceec256e7aad90b6339af265d10bf68f67634f</infohash>
<guid>https://academictorrents.com/details/19ceec256e7aad90b6339af265d10bf68f67634f</guid>
<link>https://academictorrents.com/details/19ceec256e7aad90b6339af265d10bf68f67634f</link>
<description>This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes severely degrade the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.</description>
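One common way to soften the assignment of a continuous feature to discrete visual words is kernel-density weighting: each word receives a Gaussian weight of its distance to the feature, normalized to sum to one. A minimal 1-D sketch (the words and sigma are toy values for illustration, not the paper's codebooks or one specific variant it evaluates):

```python
import math

# Toy 1-D "visual words" (codebook centres); real codebooks are
# high-dimensional and learned by clustering descriptors.
words = [0.0, 1.0, 2.0]

def soft_assign(feature, sigma=0.5):
    """Soft assignment: every word gets a normalized Gaussian weight,
    so assignment ambiguity between nearby words is modeled explicitly."""
    weights = [math.exp(-(feature - w) ** 2 / (2 * sigma ** 2)) for w in words]
    total = sum(weights)
    return [w / total for w in weights]

def hard_assign(feature):
    """Traditional codebook model: all mass on the single nearest word."""
    dists = [abs(feature - w) for w in words]
    return dists.index(min(dists))
```

Histogramming the soft weights instead of the hard indices yields the smoothed word-frequency representation whose benefits the paper quantifies.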
<size>3462518</size>
</item><item>
<title>Representation Learning: A Review and New Perspectives</title>
<category>Paper</category>
<infohash>1fc5e55522c0d5a16346df0b83a45ac2d8b5010a</infohash>
<guid>https://academictorrents.com/details/1fc5e55522c0d5a16346df0b83a45ac2d8b5010a</guid>
<link>https://academictorrents.com/details/1fc5e55522c0d5a16346df0b83a45ac2d8b5010a</link>
<description>The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.</description>
<size>1215615</size>
</item><item>
<title>Faster and Better: A Machine Learning Approach to Corner Detection</title>
<category>Paper</category>
<infohash>bdc435dc1ca872db0defb6e5b2aeb9a35d728186</infohash>
<guid>https://academictorrents.com/details/bdc435dc1ca872db0defb6e5b2aeb9a35d728186</guid>
<link>https://academictorrents.com/details/bdc435dc1ca872db0defb6e5b2aeb9a35d728186</link>
<description>The repeatability and efficiency of a corner detector determine how likely it is to be useful in a real-world application. The repeatability is important because the same scene viewed from different positions should yield features which correspond to the same real-world 3D locations. The efficiency is important because this determines whether the detector combined with further processing can operate at frame rate. Three advances are described in this paper. First, we present a new heuristic for feature detection and, using machine learning, we derive a feature detector from this which can fully process live PAL video using less than 5 percent of the available processing time. By comparison, most other detectors cannot even operate at frame rate (Harris detector 115 percent, SIFT 195 percent). Second, we generalize the detector, allowing it to be optimized for repeatability, with little loss of efficiency. Third, we carry out a rigorous comparison of corner detectors based on the above repeatability criterion applied to 3D scenes. We show that, despite being principally constructed for speed, on these stringent tests, our heuristic detector significantly outperforms existing feature detectors. Finally, the comparison demonstrates that using machine learning produces significant improvements in repeatability, yielding a detector that is both very fast and of very high quality.</description>
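The heuristic underlying the learned detector is a segment test: a pixel is flagged when a long-enough contiguous arc of a surrounding circle is uniformly brighter or darker than the centre. A simplified sketch follows (hedged: an 8-point ring, toy threshold, and toy arc length stand in for a full 16-point Bresenham circle and the paper's learned decision tree):

```python
# Simplified FAST-style segment test. A pixel is a corner candidate if
# n contiguous ring pixels are all brighter, or all darker, than the
# centre by more than threshold t. The 8-point ring below is a toy
# stand-in for a 16-point Bresenham circle of radius 3.
RING = [(-3, 0), (-2, 2), (0, 3), (2, 2), (3, 0), (2, -2), (0, -3), (-2, -2)]

def is_corner(img, x, y, t=20, n=6):
    centre = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in RING]
    for sign in (+1, -1):                 # brighter arc, then darker arc
        flags = [sign * (p - centre) > t for p in ring]
        run = 0
        for f in flags * 2:               # scan twice to catch wrap-around runs
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```

The paper's contribution is to replace this fixed test order with a decision tree learned from training data, so most non-corners are rejected after very few pixel comparisons.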
<size>3069990</size>
</item><item>
<title>Tracking-Learning-Detection</title>
<category>Paper</category>
<infohash>7f9babce18d0bad43b0e3d56cd52dd8bce9b7dd1</infohash>
<guid>https://academictorrents.com/details/7f9babce18d0bad43b0e3d56cd52dd8bce9b7dd1</guid>
<link>https://academictorrents.com/details/7f9babce18d0bad43b0e3d56cd52dd8bce9b7dd1</link>
<description>This paper investigates long-term tracking of unknown objects in a video stream. The object is defined by its location and extent in a single frame. In every frame that follows, the task is to determine the object's location and extent or indicate that the object is not present. We propose a novel tracking framework (TLD) that explicitly decomposes the long-term tracking task into tracking, learning, and detection. The tracker follows the object from frame to frame. The detector localizes all appearances that have been observed so far and corrects the tracker if necessary. The learning estimates the detector's errors and updates it to avoid these errors in the future. We study how to identify the detector's errors and learn from them. We develop a novel learning method (P-N learning) which estimates the errors by a pair of “experts”: (1) P-expert estimates missed detections, and (2) N-expert estimates false alarms. The learning process is modeled as a discrete dynamical system and the conditions under which the learning guarantees improvement are found. We describe our real-time implementation of the TLD framework and the P-N learning. We carry out an extensive quantitative evaluation which shows a significant improvement over state-of-the-art approaches.</description>
<size>2796666</size>
</item><item>
<title>Pedestrian Detection: An Evaluation of the State of the Art</title>
<category>Paper</category>
<infohash>aebf0cb5996e65368bc1a2911e6abdf72e6ff652</infohash>
<guid>https://academictorrents.com/details/aebf0cb5996e65368bc1a2911e6abdf72e6ff652</guid>
<link>https://academictorrents.com/details/aebf0cb5996e65368bc1a2911e6abdf72e6ff652</link>
<description>Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.</description>
<size>3570040</size>
</item><item>
<title>Context-Aware Saliency Detection</title>
<category>Paper</category>
<infohash>f99e18ff843cb91077a9e63e78d3c655dc310cec</infohash>
<guid>https://academictorrents.com/details/f99e18ff843cb91077a9e63e78d3c655dc310cec</guid>
<link>https://academictorrents.com/details/f99e18ff843cb91077a9e63e78d3c655dc310cec</link>
<description>We propose a new type of saliency, context-aware saliency, which aims at detecting the image regions that represent the scene. This definition differs from previous definitions whose goal is to either identify fixation points or detect the dominant object. In accordance with our saliency definition, we present a detection algorithm which is based on four principles observed in the psychological literature. The benefits of the proposed approach are evaluated in two applications where the context of the dominant objects is just as essential as the objects themselves. In image retargeting, we demonstrate that using our saliency prevents distortions in the important regions. In summarization, we show that our saliency helps to produce compact, appealing, and informative summaries.</description>
<size>2778051</size>
</item><item>
<title>CS231n: Convolutional Neural Networks for Visual Recognition 2016</title>
<category>Course</category>
<infohash>46c5af9e2075d9af06f280b55b65cf9b44eb9fe7</infohash>
<guid>https://academictorrents.com/details/46c5af9e2075d9af06f280b55b65cf9b44eb9fe7</guid>
<link>https://academictorrents.com/details/46c5af9e2075d9af06f280b55b65cf9b44eb9fe7</link>
<description>Course Description Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization and detection. Recent developments in neural network (aka “deep learning”) approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This course is a deep dive into details of the deep learning architectures with a focus on learning end-to-end models for these tasks, particularly image classification. During the 10-week course, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. The final assignment will involve training a multi-million parameter convolutional neural network and applying it on the largest image classification dataset (ImageNet). We will focus on teaching how to set up the problem of image recognition, the learning algorithms (e.g. backpropagation), practical engineering tricks for training and fine-tuning the networks and guide the students through hands-on assignments and a final course project. Much of the background and materials of this course will be drawn from the ImageNet Challenge.</description>
<size>10730933005</size>
</item><item>
<title>priceExtractionDataset</title>
<category>Dataset</category>
<infohash>95d245e3413f4bb8923b04b277749f041f443f6d</infohash>
<guid>https://academictorrents.com/details/95d245e3413f4bb8923b04b277749f041f443f6d</guid>
<link>https://academictorrents.com/details/95d245e3413f4bb8923b04b277749f041f443f6d</link>
<description/>
<size>4078689</size>
</item><item>
<title>Neural Networks Video Lectures - Hugo Larochelle</title>
<category>Course</category>
<infohash>e046bca3bc837053d1609ef33d623ee5c5af7300</infohash>
<guid>https://academictorrents.com/details/e046bca3bc837053d1609ef33d623ee5c5af7300</guid>
<link>https://academictorrents.com/details/e046bca3bc837053d1609ef33d623ee5c5af7300</link>
<description>Here is the list of topics covered in the course, segmented over 10 weeks. Each week is associated with explanatory video clips and recommended readings. 0. Introduction and math revision 1. Feedforward neural network 2. Training neural networks 3. Conditional random fields 4. Training CRFs 5. Restricted Boltzmann machine 6. Autoencoders 7. Deep learning 8. Sparse coding 9. Computer vision 10. Natural language processing</description>
<size>6600745142</size>
</item><item>
<title>Coursera - Neural Networks for Machine Learning - Geoffrey Hinton</title>
<category>Course</category>
<infohash>743c16a18756557a67478a7570baf24a59f9cda6</infohash>
<guid>https://academictorrents.com/details/743c16a18756557a67478a7570baf24a59f9cda6</guid>
<link>https://academictorrents.com/details/743c16a18756557a67478a7570baf24a59f9cda6</link>
<description>[Watch an intro video here](http://www.youtube.com/watch?v=KuPai0ogiHk) ##About the Course Neural networks use learning algorithms that are inspired by our understanding of how the brain learns, but they are evaluated by how well they work for practical applications such as speech recognition, object recognition, image retrieval and the ability to recommend products that a user will like. As computers become more powerful, Neural Networks are gradually taking over from simpler Machine Learning methods. They are already at the heart of a new generation of speech recognition devices and they are beginning to outperform earlier systems for recognizing objects in images. The course will explain the new learning procedures that are responsible for these advances, including effective new procedures for learning multiple layers of non-linear features, and give you the skills and understanding required to apply these procedures in many other domains. This YouTube video gives examples of the kind of material that will be in the course, but the course will present this material at a much gentler rate and with more examples. ##Recommended Background Programming proficiency in Matlab, Octave or Python. Enough knowledge of calculus to be able to differentiate simple functions. Enough knowledge of linear algebra to understand simple equations involving vectors and matrices. Enough knowledge of probability theory to understand what a probability density is. ##Course Format The class will consist of lecture videos, which are between 5 and 15 minutes in length. These contain 1-3 integrated quiz questions per video. There will also be standalone homework that is not part of video lectures, optional programming assignments, and a (not optional) final test. ##FAQ Will I get a certificate after completing this class? Yes. Students who successfully complete the class will receive a certificate signed by the instructor. What resources will I need for this class? 
You will need access to a computer that you can use to experiment with learning algorithms written in Matlab, Octave or Python. If you use Matlab you will need your own licence. What is the coolest thing I'll learn if I take this class? You will learn how a neural network can generate a plausible completion of almost any sentence.</description>
<size>927486283</size>
</item><item>
<title>The New College Vision and Laser Data Set</title>
<category>Dataset</category>
<infohash>9e738f5ef5f1412974ab793f315450bc8da76e73</infohash>
<guid>https://academictorrents.com/details/9e738f5ef5f1412974ab793f315450bc8da76e73</guid>
<link>https://academictorrents.com/details/9e738f5ef5f1412974ab793f315450bc8da76e73</link>
<description>##About Data collection site: New College, Oxford The New College Data Set contains 30GB of data intended for use by the mobile robotics and vision research communities. Our anticipated users are parties interested in outdoor 6 D.O.F. navigation and mapping (metric or topological) using vision and/or laser. ##Data Synopsis * Gathered while traversing 2.2km through the college's grounds and adjoining parks * 5 DOF dead reckoned pose at 28Hz * Stereo imagery captured at 20Hz * 5-view omni-directional images at 5Hz * range and intensity data from two vertically mounted SICK LMS 291-S14 lasers scanning at 75Hz ##Full Data Set Here we provide the data set in full. The data set consists of three streams of information: two sets of images, one from the Stereo Camera, and one from the Ladybug Camera, and an Alog file which contains other information like laser data and odometry. The image sets come in two formats. Firstly, we provide them as single zip files (via ftp) containing all the images. These are large files due to the number of images, so the second format is the image sets broken down into ~500MB chunks (via http). Image Sets, Multiple Zip Files * (http) Zipped Stereo Images (**) * (http) Zipped Ladybug Images * (http) Zipped Panorama Images(***) Alog File * (http) Alog File (220M) Visual Odometry Data [14-06-10] Please note that this data was not originally part of the IJRR submission. * (http) Visual Odometry (VO) data (7M) (**) Please note that the stereo images contain lens distortion and are unrectified. This makes them unsuitable for direct use in, for example, local window-based stereo matching. Code is provided that will perform undistortion and rectification on these image sequences; download details and instructions for use are provided with the code. (***) Panoramas are provided as a quick overview of the ladybug image data. 
Each image is a concatenation of the 5 camera images, all captured at the same time. We suggest using the raw ladybug images as these correspond to the files referred to in the .alog file. ##Extracting Data From The Plain-text "alog files" All non-image data is stored in a plain-text log file called an "alog". To get acquainted with this format and suitable access methods we suggest you: 1. Download some data evaluation snippets, particularly the alog which is the text file containing all non-image data, and the Flash Laser Viewer. The data set is large and users may not wish to download 30G data and start using the parsing tools immediately so we have provided some short partitions of the alog file. 2. Download and compile (as a standalone executable or from within Matlab©) the alog parsing tool we provide called uAlogParser. Full instructions on how to compile this single source file on multiple platforms are given here. 3. Try running the parser on one of the smaller evaluation alog segments. If you have access to Matlab©, try running the point cloud builder on an evaluation chunk to see some 3D laser data. 4. Download the full alog file - perhaps after finding an event that appears interesting.</description>
<size>29018175633</size>
</item><item>
<title>Movies Fight Detection Dataset</title>
<category>Dataset</category>
<infohash>70e0794e2292fc051a13f05ea6f5b6c16f3d3635</infohash>
<guid>https://academictorrents.com/details/70e0794e2292fc051a13f05ea6f5b6c16f3d3635</guid>
<link>https://academictorrents.com/details/70e0794e2292fc051a13f05ea6f5b6c16f3d3635</link>
<description>Whereas the action recognition community has focused mostly on detecting simple actions like clapping, walking or jogging, the detection of fights or in general aggressive behaviors has been comparatively less studied. Such capability may be extremely useful in some video surveillance scenarios like in prisons, psychiatric or elderly centers or even in camera phones. After an analysis of previous approaches we test the well-known Bag-of-Words framework used for action recognition in the specific problem of fight detection, along with two of the best action descriptors currently available: STIP and MoSIFT. For the purpose of evaluation and to foster research on violence detection in video we introduce a new video database containing 1000 sequences divided into two groups: fights and non-fights. Experiments on this database and another one with fights from action movies show that fights can be detected with near 90% accuracy.</description>
<size>409966355</size>
</item><item>
<title>Hockey Fight Detection Dataset</title>
<category>Dataset</category>
<infohash>38d9ed996a5a75a039b84cf8a137be794e7cee89</infohash>
<guid>https://academictorrents.com/details/38d9ed996a5a75a039b84cf8a137be794e7cee89</guid>
<link>https://academictorrents.com/details/38d9ed996a5a75a039b84cf8a137be794e7cee89</link>
<description>Whereas the action recognition community has focused mostly on detecting simple actions like clapping, walking or jogging, the detection of fights or in general aggressive behaviors has been comparatively less studied. Such capability may be extremely useful in some video surveillance scenarios like in prisons, psychiatric or elderly centers or even in camera phones. After an analysis of previous approaches we test the well-known Bag-of-Words framework used for action recognition in the specific problem of fight detection, along with two of the best action descriptors currently available: STIP and MoSIFT. For the purpose of evaluation and to foster research on violence detection in video we introduce a new video database containing 1000 sequences divided into two groups: fights and non-fights. Experiments on this database and another one with fights from action movies show that fights can be detected with near 90% accuracy.</description>
<size>171330668</size>
</item><item>
<title>Online News Popularity Data Set </title>
<category>Dataset</category>
<infohash>95d3b03397a0bafd74a662fe13ba3550c13b7ce1</infohash>
<guid>https://academictorrents.com/details/95d3b03397a0bafd74a662fe13ba3550c13b7ce1</guid>
<link>https://academictorrents.com/details/95d3b03397a0bafd74a662fe13ba3550c13b7ce1</link>
<description>##Data Set Information: * The articles were published by Mashable (www.mashable.com) and their content, as well as the rights to reproduce it, belong to them. Hence, this dataset does not share the original content but some statistics associated with it. The original content can be publicly accessed and retrieved using the provided urls. * Acquisition date: January 8, 2015 * The estimated relative performance values were estimated by the authors using a Random Forest classifier and rolling windows as the assessment method. See their article for more details on how the relative performance values were set. ##Attribute Information: Number of Attributes: 61 (58 predictive attributes, 2 non-predictive, 1 goal field) 0. url: URL of the article (non-predictive) 1. timedelta: Days between the article publication and the dataset acquisition (non-predictive) 2. n_tokens_title: Number of words in the title 3. n_tokens_content: Number of words in the content 4. n_unique_tokens: Rate of unique words in the content 5. n_non_stop_words: Rate of non-stop words in the content 6. n_non_stop_unique_tokens: Rate of unique non-stop words in the content 7. num_hrefs: Number of links 8. num_self_hrefs: Number of links to other articles published by Mashable 9. num_imgs: Number of images 10. num_videos: Number of videos 11. average_token_length: Average length of the words in the content 12. num_keywords: Number of keywords in the metadata 13. data_channel_is_lifestyle: Is data channel  Lifestyle ? 14. data_channel_is_entertainment: Is data channel  Entertainment ? 15. data_channel_is_bus: Is data channel  Business ? 16. data_channel_is_socmed: Is data channel  Social Media ? 17. data_channel_is_tech: Is data channel  Tech ? 18. data_channel_is_world: Is data channel  World ? 19. kw_min_min: Worst keyword (min. shares) 20. kw_max_min: Worst keyword (max. shares) 21. kw_avg_min: Worst keyword (avg. shares) 22. kw_min_max: Best keyword (min. shares) 23. kw_max_max: Best keyword (max. 
shares) 24. kw_avg_max: Best keyword (avg. shares) 25. kw_min_avg: Avg. keyword (min. shares) 26. kw_max_avg: Avg. keyword (max. shares) 27. kw_avg_avg: Avg. keyword (avg. shares) 28. self_reference_min_shares: Min. shares of referenced articles in Mashable 29. self_reference_max_shares: Max. shares of referenced articles in Mashable 30. self_reference_avg_sharess: Avg. shares of referenced articles in Mashable 31. weekday_is_monday: Was the article published on a Monday? 32. weekday_is_tuesday: Was the article published on a Tuesday? 33. weekday_is_wednesday: Was the article published on a Wednesday? 34. weekday_is_thursday: Was the article published on a Thursday? 35. weekday_is_friday: Was the article published on a Friday? 36. weekday_is_saturday: Was the article published on a Saturday? 37. weekday_is_sunday: Was the article published on a Sunday? 38. is_weekend: Was the article published on the weekend? 39. LDA_00: Closeness to LDA topic 0 40. LDA_01: Closeness to LDA topic 1 41. LDA_02: Closeness to LDA topic 2 42. LDA_03: Closeness to LDA topic 3 43. LDA_04: Closeness to LDA topic 4 44. global_subjectivity: Text subjectivity 45. global_sentiment_polarity: Text sentiment polarity 46. global_rate_positive_words: Rate of positive words in the content 47. global_rate_negative_words: Rate of negative words in the content 48. rate_positive_words: Rate of positive words among non-neutral tokens 49. rate_negative_words: Rate of negative words among non-neutral tokens 50. avg_positive_polarity: Avg. polarity of positive words 51. min_positive_polarity: Min. polarity of positive words 52. max_positive_polarity: Max. polarity of positive words 53. avg_negative_polarity: Avg. polarity of negative words 54. min_negative_polarity: Min. polarity of negative words 55. max_negative_polarity: Max. polarity of negative words 56. title_subjectivity: Title subjectivity 57. title_sentiment_polarity: Title polarity 58. abs_title_subjectivity: Absolute subjectivity level 59. 
abs_title_sentiment_polarity: Absolute polarity level 60. shares: Number of shares (target) ##Relevant Papers: K. Fernandes, P. Vinagre and P. Cortez. A Proactive Intelligent Decision Support System for Predicting the Popularity of Online News. Proceedings of the 17th EPIA 2015 - Portuguese Conference on Artificial Intelligence, September, Coimbra, Portugal. ##Citation Request: K. Fernandes, P. Vinagre and P. Cortez. A Proactive Intelligent Decision Support System for Predicting the Popularity of Online News. Proceedings of the 17th EPIA 2015 - Portuguese Conference on Artificial Intelligence, September, Coimbra, Portugal. ##Source: Kelwin Fernandes (kafc '@' inesctec.pt, kelwinfc '@' gmail.com) - INESC TEC, Porto, Portugal/Universidade do Porto, Portugal. Pedro Vinagre (pedro.vinagre.sousa '@' gmail.com) - ALGORITMI Research Centre, Universidade do Minho, Portugal Paulo Cortez - ALGORITMI Research Centre, Universidade do Minho, Portugal Pedro Sernadela - Universidade de Aveiro</description>
<size>7476401</size>
</item><item>
<title>Educational Process Mining (EPM): A Learning Analytics Data Set Data Set </title>
<category>Dataset</category>
<infohash>e24e083cc337695bb84a2b68707695579c0ab4d8</infohash>
<guid>https://academictorrents.com/details/e24e083cc337695bb84a2b68707695579c0ab4d8</guid>
<link>https://academictorrents.com/details/e24e083cc337695bb84a2b68707695579c0ab4d8</link>
<description>## Data Set Information: The experiments have been carried out with a group of 115 first-year undergraduate Engineering students of the University of Genoa. We carried out this study over a simulation environment named Deeds (Digital Electronics Education and Design Suite) which is used for e-learning in digital electronics. The environment provides learning materials through specialized browsers for the students, and asks them to solve various problems with different levels of difficulty. For more information about the Deeds simulator used for this course look at: [Web Link] and to know more about the exercise contents of each session see "exercises_info.txt". Our data set contains the students' time series of activities during six laboratory sessions of the digital electronics course. There are 6 folders containing the students' data per session. Each "Session" folder contains up to 99 CSV files, each dedicated to a specific student's log during that session. The number of files in each folder changes due to the number of students present in each session. Each file contains 13 features. See "features_info.txt" for more details. For the details of activities performed by the students during the course, see "activities_info.txt". The data set includes the following files:</description>
<size>4934446</size>
</item><item>
<title>Alkane-induced expression, substrate binding profile, and immunolocalization of a cytochrome P450 encoded on the nifD excision element of Anabaena 7120</title>
<category>Paper</category>
<infohash>60b8852af8e4a43c20bdf014d3e7b6af0dea0538</infohash>
<guid>https://academictorrents.com/details/60b8852af8e4a43c20bdf014d3e7b6af0dea0538</guid>
<link>https://academictorrents.com/details/60b8852af8e4a43c20bdf014d3e7b6af0dea0538</link>
<description>Background Alkanes have been hypothesized to act as universal inducers of bacterial cytochrome P450 gene expression. We tested this hypothesis on an unusual P450 gene (cyp110) found on a conserved 11 kilobase episomal DNA element of unknown function found in filamentous cyanobacteria. We also monitored the binding of potential substrates to the P450 protein and explored the distribution of P450 protein in vegetative cells and nitrogen-fixing heterocysts using immuno-electron microscopy. Results Hexadecane treatments resulted in a two-fold increase in mRNA, and a four-fold increase in P450 protein levels relative to control cultures. Hexane, octane and dodecane were toxic and induced substantial changes in membrane morphology. Long-chain saturated and unsaturated fatty acids were shown to bind the CYP110 protein using a spectroscopic spin-shift assay, but alkanes did not bind. CYP110 protein was detected in vegetative cells but not in differentiated heterocysts where nitrogen fixation occurs. Conclusion Hexadecane treatment was an effective inducer of CYP110 expression in cyanobacteria. Based on substrate binding profiles and amino acid sequence similarities it is hypothesized that CYP110 is a fatty acid ω-hydroxylase in photosynthetic cells. CYP110 was found associated with membrane fractions unlike other soluble microbial P450 proteins, and in this regard CYP110 more closely resembles eukaryotic P450s. Substrate stabilization is an unlikely mechanism for alkane induction because alkanes did not bind to purified CYP110 protein.</description>
<size>1440342</size>
</item><item>
<title>Step &amp; flash imprint lithography</title>
<category>Paper</category>
<infohash>183e19cf648c1b175c5b3f70a492080dd3e5ae76</infohash>
<guid>https://academictorrents.com/details/183e19cf648c1b175c5b3f70a492080dd3e5ae76</guid>
<link>https://academictorrents.com/details/183e19cf648c1b175c5b3f70a492080dd3e5ae76</link>
<description>Douglas J. Resnick (Motorola Labs, 2100 East Elliot Road, Tempe, AZ 85284, USA); S.V. Sreenivasan (Molecular Imprints, Inc., 1807-C West Braker Ln., Austin, TX 78758, USA); C. Grant Willson (Department of Chemical Engineering, The University of Texas at Austin, Austin, TX 78712, USA)</description>
<size>2587852</size>
</item><item>
<title>Optimizing pentose utilization in yeast: the need for novel tools and approaches</title>
<category>Paper</category>
<infohash>02b757edb47bdf2aa675eee50cda633497ead81f</infohash>
<guid>https://academictorrents.com/details/02b757edb47bdf2aa675eee50cda633497ead81f</guid>
<link>https://academictorrents.com/details/02b757edb47bdf2aa675eee50cda633497ead81f</link>
<description>Hexose and pentose cofermentation is regarded as one of the chief obstacles impeding economical conversion of lignocellulosic biomass to biofuels. Over time, successful application of traditional metabolic engineering strategies has produced yeast strains capable of utilizing the pentose sugars (especially xylose and arabinose) as sole carbon sources, yet major difficulties still remain for engineering simultaneous exogenous sugar metabolism. Beyond catabolic pathways, the focus must shift towards non-traditional aspects of cellular engineering such as host molecular transport capability, catabolite sensing and stress response mechanisms. This review highlights the need for an approach termed “panmetabolic engineering”, a new paradigm for integrating new carbon sources into host metabolic pathways. This approach will concurrently optimize the interdependent processes of transport and metabolism using novel combinatorial techniques and global cellular engineering. As a result, panmetabolic engineering is a whole-pathway approach emphasizing better pathways, reduced glucose-induced repression and increased product tolerance. In this paper, recent publications are reviewed in light of this approach and their potential to expand metabolic engineering tools. Collectively, traditional approaches and panmetabolic engineering enable the reprogramming of extant biological complexity and incorporation of exogenous carbon catabolism.</description>
<size>421627</size>
</item><item>
<title>Applications of Synthetic Biology in Microbial Biotechnology</title>
<category>Paper</category>
<infohash>a8d25d03ff868d0aef41ccb9124ac621cd967f41</infohash>
<guid>https://academictorrents.com/details/a8d25d03ff868d0aef41ccb9124ac621cd967f41</guid>
<link>https://academictorrents.com/details/a8d25d03ff868d0aef41ccb9124ac621cd967f41</link>
<description>1Department of Chemical Engineering, The University of Texas at Austin, 1 University Station, C0400, Austin, TX 78712, USA 2Department of Chemical Engineering, The Pennsylvania State University, 226A Fenske Laboratory, University Park, PA 16802-4400, USA 3Molecular Biotechnology, Jacobs University gGmbH, School of Engineering and Science, Research II - Room 113, Campus Ring 1, 28759 Bremen, Germany 4Department of Chemical and Biomolecular Engineering, University of Maryland, 1208D, Chemical and Nuclear Engineering Building 090, College Park, MD 20742, USA</description>
<size>1076446</size>
</item><item>
<title>Synthetic Biology: Tools to Design, Build, and Optimize Cellular Processes</title>
<category>Paper</category>
<infohash>03fd3cba845a8d252d9768806486f004d7f4e374</infohash>
<guid>https://academictorrents.com/details/03fd3cba845a8d252d9768806486f004d7f4e374</guid>
<link>https://academictorrents.com/details/03fd3cba845a8d252d9768806486f004d7f4e374</link>
<description>The general central dogma frames the emergent properties of life, which make biology both necessary and difficult to engineer. In a process engineering paradigm, each biological process stream and process unit is heavily influenced by regulatory interactions and interactions with the surrounding environment. Synthetic biology is developing the tools and methods that will increase control over these interactions, eventually resulting in an integrative synthetic biology that will allow ground-up cellular optimization. In this review, we attempt to contextualize the areas of synthetic biology into three tiers: (1) the process units and associated streams of the central dogma, (2) the intrinsic regulatory mechanisms, and (3) the extrinsic physical and chemical environment. Efforts at each of these three tiers attempt to control cellular systems and take advantage of emerging tools and approaches. Ultimately, it will be possible to integrate these approaches and realize the vision of integrative synthetic biology when cells are completely rewired for biotechnological goals. This review will highlight progress towards this goal as well as areas requiring further research.</description>
<size>1305853</size>
</item><item>
<title>Terrestrial Ecological Systems of the United States (Version 3.0; Updated March 2014)</title>
<category>Dataset</category>
<infohash>f1f67ca3faef718afcc35a530eebbd72c20b0eac</infohash>
<guid>https://academictorrents.com/details/f1f67ca3faef718afcc35a530eebbd72c20b0eac</guid>
<link>https://academictorrents.com/details/f1f67ca3faef718afcc35a530eebbd72c20b0eac</link>
<description>Overview: NatureServe ecologists lead efforts to develop internationally standardized classifications for terrestrial ecosystems and vegetation. One classification approach is terrestrial ecological systems: mid- to local-scale ecological units useful for standardized mapping and conservation assessments of habitat diversity and landscape conditions. Each ecological system type describes complexes of plant communities influenced by similar physical environments and dynamic ecological processes (like fire or flooding). The classification defines some 800 units across the United States and has provided an effective means of mapping ecological concepts at regional/national scales in greater detail than was previously possible. ![](http://i.imgur.com/ygP3gfS.jpg)</description>
<size>3988579396</size>
</item><item>
<title>Labeled Fishes in the Wild</title>
<category>Dataset</category>
<infohash>41bc10c77d54b49fb0a96ff5d4a0814bc2ab7da7</infohash>
<guid>https://academictorrents.com/details/41bc10c77d54b49fb0a96ff5d4a0814bc2ab7da7</guid>
<link>https://academictorrents.com/details/41bc10c77d54b49fb0a96ff5d4a0814bc2ab7da7</link>
<description>The labeled fishes in the wild image dataset is provided by NOAA Fisheries (National Marine Fisheries Service) to encourage development, testing, and performance assessment of automated image analysis algorithms for unconstrained underwater imagery. The dataset includes images of fish, invertebrates, and the seabed that were collected using camera systems deployed on a remotely operated vehicle (ROV) for fisheries surveys. Annotation data are included in accompanying data files (.dat, .vec, and .info) that describe the locations of the marked fish targets in the images. The manuscript (Cutter et al., 2015) demonstrates methods for automated detection of fish based on classifiers developed using the training image dataset and evaluated using the test set. This dataset is offered for further development of detection of fish or invertebrates in complex environments; tracking of multiple animal targets in video image sequences; recognition and classification of animal species; measurement of animals in stereo image pairs; and characterization of seabed habitats. Recommended citation: Cutter, G.; Stierhoff, K.; Zeng, J. (2015) "Automated detection of rockfish in unconstrained underwater videos using Haar cascades and a new image dataset: labeled fishes in the wild," IEEE Winter Conference on Applications of Computer Vision Workshops, pp. 57-62. The NOAA scientists who are stewards of these data may have archives of images that can provide additional opportunities for collaboration to apply and assess algorithms. Credit for use of these datasets should be provided in publications, as described in the “how-to-cite.txt” documents included in the dataset archive or as shown above. ##Dataset Labeled Fishes in the Wild image dataset (v. 1.1) (423 MB). Labeled Fishes in the Wild has three components: a training and validation positive image set (verified fish), a negative image set (non-fish), and a test image set. 
The training and test sets have accompanying annotation data that define the location and extent of each marked fish target object in the images. These represent bounding rectangles defined by expert analysts, and are in the format of .dat files used by OpenCV. Training and validation positive image set: contains images of rockfish (Sebastes spp.) and other associated species near the seabed, collected using a forward-oblique-looking digital still camera deployed on a remotely operated vehicle (ROV) by the Southwest Fisheries Science Center during surveys of rocky seabed environments offshore of southern California. Still frames from these cameras represent instances during a survey where the ROV was moving slowly, so motion effects are not a factor. The training set comprises 929 image files, containing 1005 marked fish with associated annotations (their marked locations and bounding rectangles). The marks define fish of various species, sizes, and ranges to the camera, and include portions with different background compositions. Training and validation negative image set: includes 3167 images. The 147 seabed negative images provided in the downloadable archive were extracted from the labeled fishes in the wild training and test image sets (regions containing no fish were extracted). The remaining 3020 images come from the OpenCV HaarTraining tutorial and are available in the data negatives directory. Test image set: contains an image sequence collected using the ROV’s high-definition (HD; 1080i) video camera during a near-seabed survey of fish. The test imagery for detection comprises video footage from ROV surveys. The video clip (“TEST_VIDEO_ROV10.mp4”; 210 frames at 3 frames per second (fps)) used to evaluate detectors for this study represents every 10th frame of the original video sequence (2-minute duration, approximately 30 fps). All fish targets are annotated for the 210-frame, 3 fps test video. 
Annotations of fish in the test video include a descriptor, “verified” or “apparent,” where “verified” indicates that a video analyst could identify the fish as such, and “apparent” objects were believed to be fish but were not verifiable from attributes visible in a single frame. These apparent fish may appear as faint blobs in the distance. The distinction is made in the annotation data because some classifiers may detect these apparent fish, but we neither expect nor necessarily want a detector to do so: a classifier that detects those apparent fish is probably also detecting many other non-fish targets in the images, making it inefficient and impractical. A total of 2061 fish objects were marked in the annotated frames of the dataset test video. Of those, 1008 were verified fish, and 1053 were apparent fish. During the sequence the ROV is moving; the background appears to be moving and is illuminated from different directions (as the ROV moves and rotates); small particles in the water current stream past; fish are still or moving at various speeds; fish are oriented in many directions; some fish are hidden partially behind rocks or in crevices; some indistinct fish-like objects appear in the distance. The original Labeled Fishes in the Wild dataset (v1.0, Dec. 2014) contained only the decimated test video sequence ("Test_ROV_video_h264_decim.mp4"), which includes only the marked frames from the original video. One tenth of the frames of the full frame-rate video were marked for locations of fish targets. This version of the dataset (v1.1, Jan. 2015) also contains the full test video sequence ("Test_ROV_video_h264_full.mp4"). Both the full and decimated videos have accompanying text files with analyst marks (following OpenCV .dat file conventions). Generally, for m marks, the format is: Video-filename(frame#) #-of-marks x1 y1 w1 h1 x2 y2 w2 h2 ... xm ym wm hm. 
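The marks format above can be read with a few lines of Python; this is an illustrative sketch only (the function name and the returned tuple layout are our own, not part of the dataset):

```python
def parse_marks_line(line):
    """Parse one OpenCV-style marks line:
    filename(frame) n x1 y1 w1 h1 ... xn yn wn hn
    and return (frame_id, [(x, y, w, h), ...])."""
    tokens = line.split()
    frame_id = tokens[0]                 # e.g. Test_ROV_video_h264_full.mp4(fr_14)
    n = int(tokens[1])                   # number of marked fish in this frame
    coords = [int(t) for t in tokens[2:2 + 4 * n]]
    boxes = [tuple(coords[i:i + 4]) for i in range(0, 4 * n, 4)]
    return frame_id, boxes
```

Applied to a line with two marks, this yields the frame identifier and two (x, y, w, h) bounding rectangles.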
For example, in the case of two marks, the final eight values define the bounding rectangles: Test_ROV_video_h264_full.mp4(fr_14) 2 1021 362 94 63 953 289 90 61. The marks file for the decimated video ("Test_ROV_video_h264_decim_marks.dat") indicates the frame number for the decimated and full sequence, e.g. Test_ROV_video_h264_decim.mp4(fr_1)(fullfr_14) 2 1021 362 94 63 953 289 90 61. There are 2101 frames in the full video and 210 frames in the decimated video, but 206 frames were marked; i.e. a few of the examined frames did not contain fish. Contact: george dot cutter at noaa dot gov. ![](http://i.imgur.com/U9Y7nNc.gif)</description>
<size>444365962</size>
</item><item>
<title>BuzzFeed News transcription of Airbnb NYC data</title>
<category>Dataset</category>
<infohash>968a3ff5e4182cdecd239980ecfd257a37451003</infohash>
<guid>https://academictorrents.com/details/968a3ff5e4182cdecd239980ecfd257a37451003</guid>
<link>https://academictorrents.com/details/968a3ff5e4182cdecd239980ecfd257a37451003</link>
<description>The data in this spreadsheet were transcribed from the dataset referenced in Airbnb's Dec. 1, 2015, blog post. Because Airbnb did not allow the data to be downloaded, photographed, or copy-pasted, BuzzFeed News copied the data manually over a series of three visits with the company. Some of the worksheets have not been copied in full; "[...]" indicates that a particular column of data continues in the original but was not transcribed. To the fullest extent possible, BuzzFeed News attempted to avoid transcription errors; some, however, may have snuck through.</description>
<size>192919</size>
</item><item>
<title>NYPD 7 Major Felony Incidents</title>
<category>Dataset</category>
<infohash>5c195d570d910402727638f4ba123d171694fbdc</infohash>
<guid>https://academictorrents.com/details/5c195d570d910402727638f4ba123d171694fbdc</guid>
<link>https://academictorrents.com/details/5c195d570d910402727638f4ba123d171694fbdc</link>
<description>Quarterly update of Seven Major Felonies at the incident level. For privacy reasons, incidents have been moved to the midpoint of the street segment on which they occur.</description>
<size>13233486</size>
</item><item>
<title>Vincent van Gogh Paintings</title>
<category>Dataset</category>
<infohash>c8b687c984d3d902310f27d56759ed69f5e1b4a7</infohash>
<guid>https://academictorrents.com/details/c8b687c984d3d902310f27d56759ed69f5e1b4a7</guid>
<link>https://academictorrents.com/details/c8b687c984d3d902310f27d56759ed69f5e1b4a7</link>
<description>"Vincent Willem van Gogh was a Dutch post-Impressionist painter whose work had far-reaching influence on 20th-century art. His paintings include portraits, self portraits, landscapes, still lifes of cypresses, wheat fields and sunflowers."  - Wikipedia ##Sample Images: ![](http://i.imgur.com/QbWnjct.png)</description>
<size>513763028</size>
</item><item>
<title>LUNA16-CSVfiles</title>
<category>Dataset</category>
<infohash>0f97ce1fa054ad5269bd675e3ad9ad599cd67e66</infohash>
<guid>https://academictorrents.com/details/0f97ce1fa054ad5269bd675e3ad9ad599cd67e66</guid>
<link>https://academictorrents.com/details/0f97ce1fa054ad5269bd675e3ad9ad599cd67e66</link>
<description/>
<size>55565607</size>
</item><item>
<title>LUNA16-subset9</title>
<category>Dataset</category>
<infohash>7e2b443f36fd1ec0425d29c0883c35f367f8ebd6</infohash>
<guid>https://academictorrents.com/details/7e2b443f36fd1ec0425d29c0883c35f367f8ebd6</guid>
<link>https://academictorrents.com/details/7e2b443f36fd1ec0425d29c0883c35f367f8ebd6</link>
<description/>
<size>6699650017</size>
</item><item>
<title>LUNA16-subset8</title>
<category>Dataset</category>
<infohash>5571fad650069e27067994e4bbd4293e97d479aa</infohash>
<guid>https://academictorrents.com/details/5571fad650069e27067994e4bbd4293e97d479aa</guid>
<link>https://academictorrents.com/details/5571fad650069e27067994e4bbd4293e97d479aa</link>
<description/>
<size>6025767505</size>
</item><item>
<title>LUNA16-subset7</title>
<category>Dataset</category>
<infohash>a277728a89b5185b8dccec50f325a2c2adb45eb0</infohash>
<guid>https://academictorrents.com/details/a277728a89b5185b8dccec50f325a2c2adb45eb0</guid>
<link>https://academictorrents.com/details/a277728a89b5185b8dccec50f325a2c2adb45eb0</link>
<description/>
<size>6313598213</size>
</item><item>
<title>LUNA16-subset6</title>
<category>Dataset</category>
<infohash>11b702cca1dc12694344f2f9140c1283331668c4</infohash>
<guid>https://academictorrents.com/details/11b702cca1dc12694344f2f9140c1283331668c4</guid>
<link>https://academictorrents.com/details/11b702cca1dc12694344f2f9140c1283331668c4</link>
<description/>
<size>6531050274</size>
</item><item>
<title>LUNA16-subset5</title>
<category>Dataset</category>
<infohash>9daa9e3daba301ba8b17faac5c05fde0266126d0</infohash>
<guid>https://academictorrents.com/details/9daa9e3daba301ba8b17faac5c05fde0266126d0</guid>
<link>https://academictorrents.com/details/9daa9e3daba301ba8b17faac5c05fde0266126d0</link>
<description/>
<size>6610460097</size>
</item><item>
<title>LUNA16-subset4</title>
<category>Dataset</category>
<infohash>205abf692da7846ec384fea4a0ab464ba7356bbd</infohash>
<guid>https://academictorrents.com/details/205abf692da7846ec384fea4a0ab464ba7356bbd</guid>
<link>https://academictorrents.com/details/205abf692da7846ec384fea4a0ab464ba7356bbd</link>
<description/>
<size>6856144330</size>
</item><item>
<title>A collection of sport activity files for data analysis and data mining 2016a</title>
<category>Dataset</category>
<infohash>af55533bf8229c3bff260b77a652f8b8058f6c9e</infohash>
<guid>https://academictorrents.com/details/af55533bf8229c3bff260b77a652f8b8058f6c9e</guid>
<link>https://academictorrents.com/details/af55533bf8229c3bff260b77a652f8b8058f6c9e</link>
<description>The dataset consists of activity files from seven cyclists who upload their activities to Strava and Garmin Connect. These activities can typically be downloaded in GPX format, which is essentially an XML format. The following features can be extracted from each training session: GPS location, elevation, duration, distance, heart rate, and even power.</description>
<size>245469985</size>
</item><item>
<title>LUNA16-subset3</title>
<category>Dataset</category>
<infohash>eaf01bd491ea491f7992f840be2805d73863a95e</infohash>
<guid>https://academictorrents.com/details/eaf01bd491ea491f7992f840be2805d73863a95e</guid>
<link>https://academictorrents.com/details/eaf01bd491ea491f7992f840be2805d73863a95e</link>
<description/>
<size>6896620114</size>
</item><item>
<title>LUNA16-subset2</title>
<category>Dataset</category>
<infohash>c6856fe17f5f22ffd9c81a486a0df9d0a1b6050e</infohash>
<guid>https://academictorrents.com/details/c6856fe17f5f22ffd9c81a486a0df9d0a1b6050e</guid>
<link>https://academictorrents.com/details/c6856fe17f5f22ffd9c81a486a0df9d0a1b6050e</link>
<description/>
<size>7257937108</size>
</item><item>
<title>LUNA16-subset1</title>
<category>Dataset</category>
<infohash>e2a65577c143cea0cec9319a46dd4c85a90e9643</infohash>
<guid>https://academictorrents.com/details/e2a65577c143cea0cec9319a46dd4c85a90e9643</guid>
<link>https://academictorrents.com/details/e2a65577c143cea0cec9319a46dd4c85a90e9643</link>
<description/>
<size>6334778552</size>
</item><item>
<title>LUNA16-subset0</title>
<category>Dataset</category>
<infohash>d3f859ec025cc730a7e7a0214eaaa15e66db9a24</infohash>
<guid>https://academictorrents.com/details/d3f859ec025cc730a7e7a0214eaaa15e66db9a24</guid>
<link>https://academictorrents.com/details/d3f859ec025cc730a7e7a0214eaaa15e66db9a24</link>
<description/>
<size>6811924508</size>
</item><item>
<title>Software for Combinatorial Power Series</title>
<category>Paper</category>
<infohash>9ad99856448c630fdbed581b99d458db1d2a461d</infohash>
<guid>https://academictorrents.com/details/9ad99856448c630fdbed581b99d458db1d2a461d</guid>
<link>https://academictorrents.com/details/9ad99856448c630fdbed581b99d458db1d2a461d</link>
<description>Generating functions (i.e. power series) have applications throughout enumerative and analytic combinatorics. In this document we present Genfunlib, a new Mathematica package containing a selection of implementations of symbolic methods related to generating functions. With Genfunlib one can find the generating functions for regular languages, compute the initial terms of a generating function, convert between generating function equations and recurrences, and find asymptotics. This document gives mathematical background, extensive documentation for Genfunlib, and tutorials.</description>
<size>578646</size>
</item><item>
<title>YASP December 2015 500k Data Dump</title>
<category>Dataset</category>
<infohash>384a08fd7918cd59b23fb0c3cf3cf1aea3ea4d42</infohash>
<guid>https://academictorrents.com/details/384a08fd7918cd59b23fb0c3cf3cf1aea3ea4d42</guid>
<link>https://academictorrents.com/details/384a08fd7918cd59b23fb0c3cf3cf1aea3ea4d42</link>
<description>This is a dump from December 2015 of the last 500,000 matches parsed on yasp.co.</description>
<size>13650705615</size>
</item><item>
<title>YASP 3.5 Million Data Dump</title>
<category>Dataset</category>
<infohash>5c5deeb6cfe1c944044367d2e7465fd8bd2f4acf</infohash>
<guid>https://academictorrents.com/details/5c5deeb6cfe1c944044367d2e7465fd8bd2f4acf</guid>
<link>https://academictorrents.com/details/5c5deeb6cfe1c944044367d2e7465fd8bd2f4acf</link>
<description>This is a data dump of all the parsed matches from yasp.co (as of mid December 2015). This is about 3.5 million matches.</description>
<size>99326677717</size>
</item><item>
<title>Automatic detection of sub-km craters in high resolution planetary images</title>
<category>Paper</category>
<infohash>f527ed81bcc8c7d7fe04d075cee2c9d65f0bcb66</infohash>
<guid>https://academictorrents.com/details/f527ed81bcc8c7d7fe04d075cee2c9d65f0bcb66</guid>
<link>https://academictorrents.com/details/f527ed81bcc8c7d7fe04d075cee2c9d65f0bcb66</link>
<description>Impact craters are among the most studied geomorphic planetary features because they yield information about past geological processes and provide a tool for measuring relative ages of observed geologic formations. Surveying impact craters is an important task which traditionally has been achieved by means of visual inspection of images. The sheer number of smaller craters present in high resolution images makes visual counting of such craters impractical. In this paper we present a method that brings together a novel, efficient crater identification algorithm with a data processing pipeline; together they enable a fully automatic detection of sub-km craters in large panchromatic images. The technical details of the method are described and its performance is evaluated using a large, 12.5 m/pixel image centered on the Nanedi Valles on Mars. The detection percentage of the method is ~70%. The system detects over 35,000 craters in this image; average crater density is 0.5 craters/km², but localized spots of much higher crater density are present. The method is designed to produce “million craters” global catalogs of sub-km craters on Mars and other planets wherever high resolution images are available. Such catalogs could be utilized for deriving high spatial resolution and high temporal precision stratigraphy on a regional or even planetary scale.</description>
<size>5068206</size>
</item><item>
<title>Texas Road Network</title>
<category>Dataset</category>
<infohash>224c0ec354dbf703a2cabf00bfcb14b420c5cb90</infohash>
<guid>https://academictorrents.com/details/224c0ec354dbf703a2cabf00bfcb14b420c5cb90</guid>
<link>https://academictorrents.com/details/224c0ec354dbf703a2cabf00bfcb14b420c5cb90</link>
<description>From http://snap.stanford.edu/data/roadNet-TX.html. Dataset information: This is a road network of Texas. Intersections and endpoints are represented by nodes, and the roads connecting these intersections or endpoints are represented by undirected edges. Dataset statistics: Nodes: 1379917; Edges: 1921660; Nodes in largest WCC: 1351137 (0.979); Edges in largest WCC: 1879201 (0.978); Nodes in largest SCC: 1351137 (0.979); Edges in largest SCC: 1879201 (0.978); Average clustering coefficient: 0.0470; Number of triangles: 82869; Fraction of closed triangles: 0.02091; Diameter (longest shortest path): 1054; 90-percentile effective diameter: 6.7e+02. Source (citation): J. Leskovec, K. Lang, A. Dasgupta, M. Mahoney. Community Structure in Large Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters. Internet Mathematics 6(1):29–123, 2009.</description>
<size>12442024</size>
</item><item>
<title>Pennsylvania Road Network</title>
<category>Dataset</category>
<infohash>16a16a4fbf5342d644326a6eef258e5499cf8328</infohash>
<guid>https://academictorrents.com/details/16a16a4fbf5342d644326a6eef258e5499cf8328</guid>
<link>https://academictorrents.com/details/16a16a4fbf5342d644326a6eef258e5499cf8328</link>
<description>From http://snap.stanford.edu/data/roadNet-PA.html. Dataset information: This is a road network of Pennsylvania. Intersections and endpoints are represented by nodes, and the roads connecting these intersections or endpoints are represented by undirected edges. Dataset statistics: Nodes: 1088092; Edges: 1541898; Nodes in largest WCC: 1087562 (1.000); Edges in largest WCC: 1541514 (1.000); Nodes in largest SCC: 1087562 (1.000); Edges in largest SCC: 1541514 (1.000); Average clustering coefficient: 0.0465; Number of triangles: 67150; Fraction of closed triangles: 0.02062; Diameter (longest shortest path): 786; 90-percentile effective diameter: 5.3e+02. Source (citation): J. Leskovec, K. Lang, A. Dasgupta, M. Mahoney. Community Structure in Large Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters. Internet Mathematics 6(1):29–123, 2009.</description>
<size>9945340</size>
</item><item>
<title>California Road Network</title>
<category>Dataset</category>
<infohash>0fa73e4f646b3e3258e7af3e22d651a2cf342de7</infohash>
<guid>https://academictorrents.com/details/0fa73e4f646b3e3258e7af3e22d651a2cf342de7</guid>
<link>https://academictorrents.com/details/0fa73e4f646b3e3258e7af3e22d651a2cf342de7</link>
<description>From http://snap.stanford.edu/data/roadNet-CA.html. Dataset information: This is a road network of California. Intersections and endpoints are represented by nodes, and the roads connecting these intersections or endpoints are represented by undirected edges. Dataset statistics: Nodes: 1965206; Edges: 2766607; Nodes in largest WCC: 1957027 (0.996); Edges in largest WCC: 2760388 (0.998); Nodes in largest SCC: 1957027 (0.996); Edges in largest SCC: 2760388 (0.998); Average clustering coefficient: 0.0464; Number of triangles: 120676; Fraction of closed triangles: 0.02097; Diameter (longest shortest path): 849; 90-percentile effective diameter: 5e+02. Source (citation): J. Leskovec, K. Lang, A. Dasgupta, M. Mahoney. Community Structure in Large Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters. Internet Mathematics 6(1):29–123, 2009.</description>
<size>17892860</size>
</item><item>
<title>PASCAL Visual Object Classes Challenge 2012 (VOC2012) Complete Dataset</title>
<category>Dataset</category>
<infohash>f6ddac36ac7ae2ef79dc72a26a065b803c9c7230</infohash>
<guid>https://academictorrents.com/details/f6ddac36ac7ae2ef79dc72a26a065b803c9c7230</guid>
<link>https://academictorrents.com/details/f6ddac36ac7ae2ef79dc72a26a065b803c9c7230</link>
<description>Introduction The main goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are: Person: person Animal: bird, cat, cow, dog, horse, sheep Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor Data To download the training/validation data, see the development kit. The training data provided consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the twenty classes present in the image. Note that multiple objects from multiple classes may be present in the same image. Annotation was performed according to a set of guidelines distributed to all annotators. A subset of images are also annotated with pixel-wise segmentation of each object present, to support the segmentation competition. Images for the action classification task are disjoint from those of the classification/detection/segmentation tasks. They have been partially annotated with people, bounding boxes, reference points and their actions. Annotation was performed according to a set of guidelines distributed to all annotators. Images for the person layout taster, where the test set is disjoint from the main tasks, have been additionally annotated with parts of the people (head/hands/feet). The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission. In the second stage, the test set will be made available for the actual competition. 
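Detections against these bounding-box annotations are commonly scored by the overlap between a predicted box and the ground-truth box (VOC counts a detection as correct when the intersection-over-union exceeds 0.5). The sketch below, with illustrative boxes of our own, shows that overlap measure:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy

    def area(box):
        return (box[2] - box[0]) * (box[3] - box[1])

    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping by half: intersection 50, union 150.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50/150 = 0.333...
```

By this criterion the example pair, at one third overlap, would not count as a correct detection.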
As in the VOC2008-2011 challenges, no ground truth for the test data will be released. The data has been split into 50% for training/validation and 50% for testing. The distributions of images and objects by class are approximately equal across the training/validation and test sets. Statistics of the database are online.</description>
<size>2000150528</size>
</item><item>
<title>PASCAL Visual Object Classes Challenge 2011 (VOC2011) Complete Dataset</title>
<category>Dataset</category>
<infohash>408e318ba27031a533c709b7d696e34637bcfc0e</infohash>
<guid>https://academictorrents.com/details/408e318ba27031a533c709b7d696e34637bcfc0e</guid>
<link>https://academictorrents.com/details/408e318ba27031a533c709b7d696e34637bcfc0e</link>
<description>Introduction The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are: Person: person Animal: bird, cat, cow, dog, horse, sheep Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor Data To download the training/validation data, see the development kit. The training data provided consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the twenty classes present in the image. Note that multiple objects from multiple classes may be present in the same image. Some example images can be viewed online. A subset of images are also annotated with pixel-wise segmentation of each object present, to support the segmentation competition. Some segmentation examples can be viewed online. Annotation was performed according to a set of guidelines distributed to all annotators. The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission. In the second stage, the test set will be made available for the actual competition. As in the VOC2008-2010 challenges, no ground truth for the test data will be released. The data has been split into 50% for training/validation and 50% for testing. The distributions of images and objects by class are approximately equal across the training/validation and test sets. In total there are 28,952 images. Further statistics are online.</description>
<size>1771402240</size>
</item><item>
<title>PASCAL Visual Object Classes Challenge 2010 (VOC2010) Complete Dataset</title>
<category>Dataset</category>
<infohash>96db21675f464480780637f1416477ac14a81107</infohash>
<guid>https://academictorrents.com/details/96db21675f464480780637f1416477ac14a81107</guid>
<link>https://academictorrents.com/details/96db21675f464480780637f1416477ac14a81107</link>
<description>Introduction The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are: Person: person Animal: bird, cat, cow, dog, horse, sheep Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor Data To download the training/validation data, see the development kit. The training data provided consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the twenty classes present in the image. Note that multiple objects from multiple classes may be present in the same image. Some example images can be viewed online. A subset of images are also annotated with pixel-wise segmentation of each object present, to support the segmentation competition. Some segmentation examples can be viewed online. Annotation was performed according to a set of guidelines distributed to all annotators. The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission. In the second stage, the test set will be made available for the actual competition. As in the VOC2008/VOC2009 challenges, no ground truth for the test data will be released. The data has been split into 50% for training/validation and 50% for testing. The distributions of images and objects by class are approximately equal across the training/validation and test sets. In total there are 21,738 images. Further statistics are online.
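The per-image annotation files described above can be read with a short sketch. This is a hedged example only: it assumes the usual PASCAL VOC XML annotation layout with `object`, `name`, and `bndbox` elements (the development kit's own MATLAB readers remain the reference implementation, and the exact layout should be checked against the downloaded data):

```python
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    """Return (class label, bounding box) pairs from one annotation file,
    assuming the standard VOC <object><name>/<bndbox> layout."""
    root = ET.parse(xml_path).getroot()
    objects = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        box = obj.find("bndbox")
        # Coordinates are stored as text; some releases use floats.
        xmin, ymin, xmax, ymax = (int(float(box.findtext(t)))
                                  for t in ("xmin", "ymin", "xmax", "ymax"))
        objects.append((name, (xmin, ymin, xmax, ymax)))
    return objects
```

Because multiple objects from multiple classes may appear in one image, the sketch returns a list rather than a single label.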
Best Practice The VOC challenge encourages two types of participation: (i) methods which are trained using only the provided "trainval" (training + validation) data; (ii) methods built or trained using any data except the provided test data, for example commercial systems. In both cases the test data must be used strictly for reporting of results alone - it must not be used in any way to train or tune systems, for example by running multiple parameter choices and reporting the best results obtained. If using the training data we provide as part of the challenge development kit, all development, e.g. feature selection and parameter tuning, must use the "trainval" (training + validation) set alone. One way is to divide the set into training and validation sets (as suggested in the development kit). Other schemes e.g. n-fold cross-validation are equally valid. The tuned algorithms should then be run only once on the test data. In VOC2007 we made all annotations available (i.e. for training, validation and test data) but since then we have not made the test annotations available. Instead, results on the test data are submitted to an evaluation server. Since algorithms should only be run once on the test data we strongly discourage multiple submissions to the server (and indeed the number of submissions for the same algorithm is strictly controlled), as the evaluation server should not be used for parameter tuning. We encourage you to always publish test results on the latest release of the challenge, using the output of the evaluation server. If you wish to compare methods or design choices e.g. subsets of features, then there are two options: (i) use the entire VOC2007 data, where all annotations are available; (ii) report cross-validation results using the latest "trainval" set alone.</description>
<size>1345332224</size>
</item><item>
<title>PASCAL Visual Object Classes Challenge 2009 (VOC2009) Complete Dataset</title>
<category>Dataset</category>
<infohash>e2209d95a13d364aad0811eacbf391a10c37d963</infohash>
<guid>https://academictorrents.com/details/e2209d95a13d364aad0811eacbf391a10c37d963</guid>
<link>https://academictorrents.com/details/e2209d95a13d364aad0811eacbf391a10c37d963</link>
<description>Introduction The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are: Person: person Animal: bird, cat, cow, dog, horse, sheep Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor Data To download the training/validation data, see the development kit. The training data provided consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the twenty classes present in the image. Note that multiple objects from multiple classes may be present in the same image. Some example images can be viewed online. A subset of images are also annotated with pixel-wise segmentation of each object present, to support the segmentation competition. Some segmentation examples can be viewed online. Annotation was performed according to a set of guidelines distributed to all annotators. The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission. In the second stage, the test set will be made available for the actual competition. As in the VOC2008 challenge, no ground truth for the test data will be released. The data has been split into 50% for training/validation and 50% for testing. The distributions of images and objects by class are approximately equal across the training/validation and test sets. In total there are 14,743 images. Further statistics are online.</description>
<size>935792128</size>
</item><item>
<title>PASCAL Visual Object Classes Challenge 2008 (VOC2008) Complete Dataset</title>
<category>Dataset</category>
<infohash>577c99c831a03753c38764201123cbc5e9e3c03b</infohash>
<guid>https://academictorrents.com/details/577c99c831a03753c38764201123cbc5e9e3c03b</guid>
<link>https://academictorrents.com/details/577c99c831a03753c38764201123cbc5e9e3c03b</link>
<description>Data To download the training/validation data, see the development kit. In total there are 10,057 images. The training data provided consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the twenty classes present in the image. Note that multiple objects from multiple classes may be present in the same image. Some example images can be viewed online. Annotation was performed according to a set of guidelines distributed to all annotators. The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission. In the second stage, the test set will be made available for the actual competition. As in the VOC2007 challenge, no ground truth for the test data will be released until after the challenge is complete. The data has been split into 50% for training/validation and 50% for testing. The distributions of images and objects by class are approximately equal across the training/validation and test sets. Further statistics are online - statistics for the test data will be released after the challenge. Development Kit The development kit consists of the training/validation data, MATLAB code for reading the annotation data, support files, and example implementations for each competition.
Download the training/validation data (550MB tar file) - includes patch of 14-Jul-2008 Download the development kit code and documentation (250KB tar file) Patch 14-Jul-08 There were errors in the 14-Apr-2008 release of the training/validation data as follows: image labels in x_train/x_trainval.txt (classification task) did not include the "don't care" (zero) label; the test set for the main challenge (classification/detection) included images used for the layout challenge - these will be ignored in the evaluation; some images contained only "difficult" objects - these will be ignored in the evaluation (classification/detection). The errors will not affect evaluation, but participants wanting to take advantage of the "don't care" label (without having to compute it themselves) should download the patch, which contains updated image lists, and can be untarred over the original development kit: Running on VOC2007 test data If at all possible, participants are requested to submit results for both the VOC2008 and VOC2007 test sets provided in the test data, to allow comparison of results across the years. In both cases, the VOC2008 training/validation data should be used for training i.e. Train on VOC2008 train+val, test on VOC2008 test. Train on VOC2008 train+val, test on VOC2007 test. The updated development kit provides a switch to select between test sets. Results are placed in two directories, results/VOC2007/ or results/VOC2008/ according to the test set. Publication Policy The main mechanism for dissemination of the results will be the challenge webpage. For VOC2008, the detailed output of each submitted method will be published online e.g. per-image confidence for the classification task, and bounding boxes for the detection task. The intention is to assist others in the community in carrying out detailed analysis and comparison with their own methods.
The published results will not be anonymous - by submitting results, participants are agreeing to have their results shared online. Acknowledgements We gratefully acknowledge the following, who spent many long hours providing annotation for the VOC2008 database: Jan-Hendrik Becker, Patrick Buehler, Kian Ming Chai, Miha Drenik, Chris Engels, Jan Van Gemert, Hedi Harzallah, Nicolas Heess, Zdenek Kalal, Lubor Ladicky, Marcin Marszalek, Alastair Moore, Maria-Elena Nilsback, Paul Sturgess, David Tingdahl, Hirofumi Uemura, Martin Vogt. Support The preparation and running of this challenge is supported by the EU-funded PASCAL Network of Excellence on Pattern Analysis, Statistical Modelling and Computational Learning.</description>
<size>581262336</size>
</item><item>
<title>PASCAL Visual Object Classes Challenge 2006 (VOC2006) Complete Dataset</title>
<category>Dataset</category>
<infohash>db06b76152c0bf475af4093538e5a8d0e7971273</infohash>
<guid>https://academictorrents.com/details/db06b76152c0bf475af4093538e5a8d0e7971273</guid>
<link>https://academictorrents.com/details/db06b76152c0bf475af4093538e5a8d0e7971273</link>
<description>Details of the contributor of each image can be found in the file "contrib.txt" included in the database. Categories: Views of bicycles, buses, cats, cars, cows, dogs, horses, motorbikes, people, and sheep in arbitrary pose. Number of images: 5,304. Number of annotated images: 5,304.</description>
<size>2019317760</size>
</item><item>
<title>PASCAL Visual Object Classes Challenge 2005 (VOC2005) Complete Dataset</title>
<category>Dataset</category>
<infohash>f758e9f976e3742b1349bf4b42e985b6ce1299ce</infohash>
<guid>https://academictorrents.com/details/f758e9f976e3742b1349bf4b42e985b6ce1299ce</guid>
<link>https://academictorrents.com/details/f758e9f976e3742b1349bf4b42e985b6ce1299ce</link>
<description>Categories: Views of motorbikes, bicycles, people, and cars in arbitrary pose. Number of images: 1,578. Number of annotated images: 1,578.</description>
<size>847607556</size>
</item><item>
<title>PASCAL Visual Object Classes Challenge 2007 (VOC2007) Complete Dataset</title>
<category>Dataset</category>
<infohash>c9db37df1eb2e549220dc19f70f60f7786d067d4</infohash>
<guid>https://academictorrents.com/details/c9db37df1eb2e549220dc19f70f60f7786d067d4</guid>
<link>https://academictorrents.com/details/c9db37df1eb2e549220dc19f70f60f7786d067d4</link>
<description>Introduction The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are: Person: person Animal: bird, cat, cow, dog, horse, sheep Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor There will be two main competitions, and two smaller scale "taster" competitions.</description>
<size>923801600</size>
</item><item>
<title>Structured Web Data Extraction Dataset (SWDE)</title>
<category>Dataset</category>
<infohash>411576c7e80787e4b40452360f5f24acba9b5159</infohash>
<guid>https://academictorrents.com/details/411576c7e80787e4b40452360f5f24acba9b5159</guid>
<link>https://academictorrents.com/details/411576c7e80787e4b40452360f5f24acba9b5159</link>
<description>## Motivation This dataset is a real-world web page collection used for research on the automatic extraction of structured data (e.g., attribute-value pairs of entities) from the Web. We hope it could serve as a useful benchmark for evaluating and comparing different methods for structured web data extraction. ## Contents of the Dataset Currently the dataset involves: 8 verticals with diverse semantics; 80 web sites (10 per vertical); 124,291 web pages (200 ~ 2,000 per web site), each containing a single data record with detailed information of an entity; 32 attributes (3 ~ 5 per vertical) associated with carefully labeled ground-truth of corresponding values in each web page. The goal of structured data extraction is to automatically identify the values of these attributes from web pages. The involved verticals are summarized as follows:

|Vertical|#Sites|#Pages|#Attributes|Attributes|
|---|---|---|---|---|
|Auto|10|17,923|4|model, price, engine, fuel_economy|
|Book|10|20,000|5|title, author, isbn_13, publisher, publication_date|
|Camera|10|5,258|3|model, price, manufacturer|
|Job|10|20,000|4|title, company, location, date_posted|
|Movie|10|20,000|4|title, director, genre, mpaa_rating|
|NBA Player|10|4,405|4|name, team, height, weight|
|Restaurant|10|20,000|4|name, address, phone, cuisine|
|University|10|16,705|4|name, phone, website, type|

## Format of Web Pages Each web page in the dataset is stored as one .htm file (in UTF-8 encoding) where the first tag encodes the source URL of the page.
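As a small illustration of the page format above, here is a hedged Python sketch that recovers the source URL from a page's leading tag. It is a sketch under an assumption: the URL appears somewhere inside that first tag, but the exact encoding is not specified here and should be verified against the files themselves.

```python
import re

def source_url(htm_path):
    """Pull the first http(s) URL out of the leading tag of an
    SWDE .htm file (assumption: the first tag encodes the page's
    source URL, as described above)."""
    with open(htm_path, encoding="utf-8") as f:
        head = f.read(2048)  # the first tag sits at the start of the file
    first_tag = re.search(r"<[^>]*>", head)
    if first_tag:
        url = re.search(r'https?://\S+?(?=["\'<>\s])', first_tag.group(0))
        if url:
            return url.group(0)
    return None
```

The lazy quantifier with a lookahead stops the match at the quote or angle bracket that closes the tag attribute, so surrounding markup is not swallowed into the URL.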
## Format of Ground-truth Files For each web site, the page-level ground-truth of attribute values has been labeled using handcrafted regular expressions and stored in .txt files (in UTF-8 encoding) named as: "&lt;vertical&gt;-&lt;site&gt;-&lt;attribute&gt;.txt". In each such file: The first line stores the names of vertical, site, and attribute, separated by TAB characters ( \t ). The second line stores some statistics (separated by TABs) w.r.t. the corresponding site and attribute, including: * the total number of pages, * the number of pages containing attribute values, * the total number of attribute values contained in the pages, * the number of unique attribute values. Each remaining line stores the ground-truth information (separated by TABs) of one page, in sequence of: * page ID, * the number of attribute values in the page, * attribute values ("&lt;NULL&gt;" in case of non-existence). ## Notes on Ground-truth Labeling The ground-truth labeling was conducted at the DOM-node level. More specifically, the candidate attribute values in a web page are the non-empty strings contained in text nodes in the corresponding DOM tree. One page (although containing a single data record) may contain multiple distinct values that correspond to an attribute (e.g., multiple authors of a book, multiple granularity levels of addresses). Currently, when a text node presents a mixture of multiple attributes, its string value is labeled with each of these attributes, if no substitute is available. Before being stored in .txt files, the raw attribute values were refined by removing redundant separators (e.g., \t , \n ). ## Reference We would appreciate it if you cite the following paper when using the dataset: Qiang Hao, Rui Cai, Yanwei Pang, and Lei Zhang. "From One Tree to a Forest: a Unified Solution for Structured Web Data Extraction". In Proc. of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2011), pp. 775-784, Beijing, China, July 24-28, 2011. ## Contact If you have questions about this dataset, please contact Qiang Hao (haoq@live.com).</description>
<size>207314582</size>
</item><item>
<title>OP-ELM: Optimally Pruned Extreme Learning Machine</title>
<category>Paper</category>
<infohash>39954f98dabbc877cb89baec71c9e9c7f90fb246</infohash>
<guid>https://academictorrents.com/details/39954f98dabbc877cb89baec71c9e9c7f90fb246</guid>
<link>https://academictorrents.com/details/39954f98dabbc877cb89baec71c9e9c7f90fb246</link>
<description>In this brief, the optimally pruned extreme learning machine (OP-ELM) methodology is presented. It is based on the original extreme learning machine (ELM) algorithm with additional steps to make it more robust and generic. The whole methodology is presented in detail and then applied to several regression and classification problems. Results for both computational time and accuracy (mean square error) are compared to the original ELM and to three other widely used methodologies: multilayer perceptron (MLP), support vector machine (SVM), and Gaussian process (GP). As the experiments for both regression and classification illustrate, the proposed OP-ELM methodology performs several orders of magnitude faster than the other algorithms used in this brief, except the original ELM. Despite the simplicity and fast performance, the OP-ELM is still able to maintain an accuracy that is comparable to the performance of the SVM. A toolbox for the OP-ELM is publicly available online.</description>
<size>265729</size>
</item><item>
<title>Global Synchronization for Discrete-Time Stochastic Complex Networks With Randomly Occurred Nonlinearities and Mixed Time Delays</title>
<category>Paper</category>
<infohash>376694bc675910dbdec98df5de98d209e87041aa</infohash>
<guid>https://academictorrents.com/details/376694bc675910dbdec98df5de98d209e87041aa</guid>
<link>https://academictorrents.com/details/376694bc675910dbdec98df5de98d209e87041aa</link>
<description>In this paper, the problem of stochastic synchronization analysis is investigated for a new array of coupled discrete-time stochastic complex networks with randomly occurred nonlinearities (RONs) and time delays. The discrete-time complex networks under consideration are subject to: (1) stochastic nonlinearities that occur according to the Bernoulli distributed white noise sequences; (2) stochastic disturbances that enter the coupling term, the delayed coupling term as well as the overall network; and (3) time delays that include both the discrete and distributed ones. Note that the newly introduced RONs and the multiple stochastic disturbances can better reflect the dynamical behaviors of coupled complex networks whose information transmission process is affected by a noisy environment (e.g., Internet-based control systems). By constructing a novel Lyapunov-like matrix functional, the idea of delay fractioning is applied to deal with the addressed synchronization analysis problem. By employing a combination of the linear matrix inequality (LMI) techniques, the free-weighting matrix method and stochastic analysis theories, several delay-dependent sufficient conditions are obtained which ensure the asymptotic synchronization in the mean square sense for the discrete-time stochastic complex networks with time delays. The criteria derived are characterized in terms of LMIs whose solution can be solved by utilizing the standard numerical software. A simulation example is presented to show the effectiveness and applicability of the proposed results.</description>
<size>923798</size>
</item><item>
<title>Second-Order Consensus for Multiagent Systems With Directed Topologies and Nonlinear Dynamics</title>
<category>Paper</category>
<infohash>d1a2d1684dbdb88c5a35577940b5cbac7fe00d57</infohash>
<guid>https://academictorrents.com/details/d1a2d1684dbdb88c5a35577940b5cbac7fe00d57</guid>
<link>https://academictorrents.com/details/d1a2d1684dbdb88c5a35577940b5cbac7fe00d57</link>
<description>This paper considers a second-order consensus problem for multiagent systems with nonlinear dynamics and directed topologies where each agent is governed by both position and velocity consensus terms with a time-varying asymptotic velocity. To describe the system's ability for reaching consensus, a new concept about the generalized algebraic connectivity is defined for strongly connected networks and then extended to the strongly connected components of the directed network containing a spanning tree. Some sufficient conditions are derived for reaching second-order consensus in multiagent systems with nonlinear dynamics based on algebraic graph theory, matrix theory, and Lyapunov control approach. Finally, simulation examples are given to verify the theoretical analysis.</description>
<size>326659</size>
</item><item>
<title>Extreme Learning Machine for Regression and Multiclass Classification</title>
<category>Paper</category>
<infohash>9cecadb5dad292a3121c018b161d5ef19c64540b</infohash>
<guid>https://academictorrents.com/details/9cecadb5dad292a3121c018b161d5ef19c64540b</guid>
<link>https://academictorrents.com/details/9cecadb5dad292a3121c018b161d5ef19c64540b</link>
<description>Due to the simplicity of their implementations, least square support vector machine (LS-SVM) and proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LS-SVM and PSVM cannot be used in regression and multiclass classification applications directly, although variants of LS-SVM and PSVM have been proposed to handle such cases. This paper shows that both LS-SVM and PSVM can be simplified further and a unified learning framework of LS-SVM, PSVM, and other regularization algorithms referred to as extreme learning machine (ELM) can be built. ELM works for the “generalized” single-hidden-layer feedforward networks (SLFNs), but the hidden layer (or called feature mapping) in ELM need not be tuned. Such SLFNs include but are not limited to SVM, polynomial network, and the conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform with a widespread type of feature mappings and can be applied in regression and multiclass classification applications directly; 2) from the optimization method point of view, ELM has milder optimization constraints compared to LS-SVM and PSVM; 3) in theory, compared to ELM, LS-SVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any target continuous function and classify any disjoint regions. As verified by the simulation results, ELM tends to have better scalability and achieve similar (for regression and binary class cases) or much better (for multiclass cases) generalization performance at much faster learning speed (up to thousands of times) than traditional SVM and LS-SVM.</description>
<size>1274126</size>
</item><item>
<title>Improving Bag-of-Features for Large Scale Image Search</title>
<category>Paper</category>
<infohash>a74f6007905e2debb69b590badef7d9f3367a74f</infohash>
<guid>https://academictorrents.com/details/a74f6007905e2debb69b590badef7d9f3367a74f</guid>
<link>https://academictorrents.com/details/a74f6007905e2debb69b590badef7d9f3367a74f</link>
<description>This article improves recent methods for large scale image search. We first analyze the bag-of-features approach in the framework of approximate nearest neighbor search. This leads us to derive a more precise representation based on Hamming embedding (HE) and weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within an inverted file and are efficiently exploited for all images in the dataset. We then introduce a graph-structured quantizer which significantly speeds up the assignment of the descriptors to visual words. A comparison with the state of the art shows the interest of our approach when high accuracy is needed. Experiments performed on three reference datasets and a dataset of one million images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short-list of images, is shown to be complementary to our weak geometric consistency constraints. Our approach is shown to outperform the state-of-the-art on the three datasets.</description>
<size>4726793</size>
</item><item>
<title>The Pascal Visual Object Classes (VOC) Challenge</title>
<category>Paper</category>
<infohash>fa69bd239afdedbcb0ab75d5cdee366e1db0f4b7</infohash>
<guid>https://academictorrents.com/details/fa69bd239afdedbcb0ab75d5cdee366e1db0f4b7</guid>
<link>https://academictorrents.com/details/fa69bd239afdedbcb0ab75d5cdee366e1db0f4b7</link>
<description>The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.</description>
<size>8557871</size>
</item><item>
<title>Image Super-Resolution Via Sparse Representation</title>
<category>Paper</category>
<infohash>82eae8cd4f209ff7f6223de12567a8fd4ae167a2</infohash>
<guid>https://academictorrents.com/details/82eae8cd4f209ff7f6223de12567a8fd4ae167a2</guid>
<link>https://academictorrents.com/details/82eae8cd4f209ff7f6223de12567a8fd4ae167a2</link>
<description>This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large number of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework.</description>
<size>1845477</size>
</item><item>
<title>Guided Image Filtering</title>
<category>Paper</category>
<infohash>ecac802085ae8d49336e17c296c6748c27ea7f63</infohash>
<guid>https://academictorrents.com/details/ecac802085ae8d49336e17c296c6748c27ea7f63</guid>
<link>https://academictorrents.com/details/ecac802085ae8d49336e17c296c6748c27ea7f63</link>
<description>In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc.</description>
<size>8547646</size>
</item><item>
<title>SLIC Superpixels Compared to State-of-the-Art Superpixel Methods</title>
<category>Paper</category>
<infohash>e23e155c685cffaa92a8966b94220c60f50ec80b</infohash>
<guid>https://academictorrents.com/details/e23e155c685cffaa92a8966b94220c60f50ec80b</guid>
<link>https://academictorrents.com/details/e23e155c685cffaa92a8966b94220c60f50ec80b</link>
<description>Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.</description>
<size>4248975</size>
</item><item>
<title>Contour Detection and Hierarchical Image Segmentation</title>
<category>Paper</category>
<infohash>01bef161fa5ded12680bd5cd755a70af569108ad</infohash>
<guid>https://academictorrents.com/details/01bef161fa5ded12680bd5cd755a70af569108ad</guid>
<link>https://academictorrents.com/details/01bef161fa5ded12680bd5cd755a70af569108ad</link>
<description>This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.</description>
<size>5683773</size>
</item><item>
<title>Accurate, Dense, and Robust Multiview Stereopsis</title>
<category>Paper</category>
<infohash>257aa08c83e2580215773ddadc84daec6b59c9a6</infohash>
<guid>https://academictorrents.com/details/257aa08c83e2580215773ddadc84daec6b59c9a6</guid>
<link>https://academictorrents.com/details/257aa08c83e2580215773ddadc84daec6b59c9a6</link>
<description>This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and "crowded" scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark [1] shows that the proposed method outperforms all others submitted so far for four out of the six data sets.</description>
<size>6406059</size>
</item><item>
<title>Evaluating Color Descriptors for Object and Scene Recognition</title>
<category>Paper</category>
<infohash>141ba732b5e889ff3c064fc092782c7b77bbb309</infohash>
<guid>https://academictorrents.com/details/141ba732b5e889ff3c064fc092782c7b77bbb309</guid>
<link>https://academictorrents.com/details/141ba732b5e889ff3c064fc092782c7b77bbb309</link>
<description>Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge.</description>
<size>3300827</size>
</item><item>
<title>Object Detection with Discriminatively Trained Part-Based Models</title>
<category>Paper</category>
<infohash>79c7bfe2f564e96e250420e8cf06074323703c5e</infohash>
<guid>https://academictorrents.com/details/79c7bfe2f564e96e250420e8cf06074323703c5e</guid>
<link>https://academictorrents.com/details/79c7bfe2f564e96e250420e8cf06074323703c5e</link>
<description>We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.</description>
<size>5766739</size>
</item><item>
<title>Lipid Rafts As a Membrane-Organizing Principle</title>
<category>Paper</category>
<infohash>2de658c4f60db640b65b28385bf51effbe428802</infohash>
<guid>https://academictorrents.com/details/2de658c4f60db640b65b28385bf51effbe428802</guid>
<link>https://academictorrents.com/details/2de658c4f60db640b65b28385bf51effbe428802</link>
<description>Cell membranes display a tremendous complexity of lipids and proteins designed to perform the functions cells require. To coordinate these functions, the membrane is able to laterally segregate its constituents. This capability is based on dynamic liquid-liquid immiscibility and underlies the raft concept of membrane subcompartmentalization. Lipid rafts are fluctuating nanoscale assemblies of sphingolipid, cholesterol, and proteins that can be stabilized to coalesce, forming platforms that function in membrane signaling and trafficking. Here we review the evidence for how this principle combines the potential for sphingolipid-cholesterol self-assembly with protein specificity to selectively focus membrane bioactivity.</description>
<size>694810</size>
</item><item>
<title>Effectiveness and Safety of Tenofovir Gel, an Antiretroviral Microbicide, for the Prevention of HIV Infection in Women</title>
<category>Paper</category>
<infohash>fbc36d04e1e08af271246388ad643055d11d75c1</infohash>
<guid>https://academictorrents.com/details/fbc36d04e1e08af271246388ad643055d11d75c1</guid>
<link>https://academictorrents.com/details/fbc36d04e1e08af271246388ad643055d11d75c1</link>
<description>The Centre for the AIDS Program of Research in South Africa (CAPRISA) 004 trial assessed the effectiveness and safety of a 1% vaginal gel formulation of tenofovir, a nucleotide reverse transcriptase inhibitor, for the prevention of HIV acquisition in women. A double-blind, randomized controlled trial was conducted comparing tenofovir gel (n = 445 women) with placebo gel (n = 444 women) in sexually active, HIV-uninfected 18- to 40-year-old women in urban and rural KwaZulu-Natal, South Africa. HIV serostatus, safety, sexual behavior, and gel and condom use were assessed at monthly follow-up visits for 30 months. HIV incidence in the tenofovir gel arm was 5.6 per 100 women-years (person time of study observation) (38 out of 680.6 women-years) compared with 9.1 per 100 women-years (60 out of 660.7 women-years) in the placebo gel arm (incidence rate ratio = 0.61; P = 0.017). In high adherers (gel adherence &gt; 80%), HIV incidence was 54% lower (P = 0.025) in the tenofovir gel arm. In intermediate adherers (gel adherence 50 to 80%) and low adherers (gel adherence &lt; 50%), the HIV incidence reduction was 38 and 28%, respectively. Tenofovir gel reduced HIV acquisition by an estimated 39% overall, and by 54% in women with high gel adherence. No increase in the overall adverse event rates was observed. There were no changes in viral load and no tenofovir resistance in HIV seroconverters. Tenofovir gel could potentially fill an important HIV prevention gap, especially for women unable to successfully negotiate mutual monogamy or condom use.</description>
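The headline figures above follow directly from the reported event counts and person-time; a short check using only the numbers quoted in the abstract:

```python
# Numbers taken directly from the CAPRISA 004 abstract:
# (seroconversions, women-years of follow-up) per study arm.
tenofovir_arm = (38, 680.6)
placebo_arm = (60, 660.7)

def incidence_per_100wy(events, women_years):
    """HIV incidence expressed per 100 women-years."""
    return 100.0 * events / women_years

# Incidence rate ratio and the implied overall protective effect.
irr = incidence_per_100wy(*tenofovir_arm) / incidence_per_100wy(*placebo_arm)
protection = 100.0 * (1.0 - irr)  # percent reduction in HIV acquisition
```

Running this reproduces the abstract's 5.6 and 9.1 per 100 women-years, an incidence rate ratio of 0.61, and the estimated 39% overall reduction.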
<size>836682</size>
</item><item>
<title>Food Security: The Challenge of Feeding 9 Billion People</title>
<category>Paper</category>
<infohash>3a54c0b98d9ca3ae92f1991bc3847c62be0ea8fc</infohash>
<guid>https://academictorrents.com/details/3a54c0b98d9ca3ae92f1991bc3847c62be0ea8fc</guid>
<link>https://academictorrents.com/details/3a54c0b98d9ca3ae92f1991bc3847c62be0ea8fc</link>
<description>Continuing population and consumption growth will mean that the global demand for food will increase for at least another 40 years. Growing competition for land, water, and energy, in addition to the overexploitation of fisheries, will affect our ability to produce food, as will the urgent requirement to reduce the impact of the food system on the environment. The effects of climate change are a further threat. But the world can produce more food and can ensure that it is used more efficiently and equitably. A multifaceted and linked global strategy is needed to ensure sustainable and equitable food security, different components of which are explored here.</description>
<size>425076</size>
</item><item>
<title>Porphyrin-Sensitized Solar Cells with Cobalt (II/III)–Based Redox Electrolyte Exceed 12 Percent Efficiency</title>
<category>Paper</category>
<infohash>b92bb1cca69f31daafd2c344e969d070890afbac</infohash>
<guid>https://academictorrents.com/details/b92bb1cca69f31daafd2c344e969d070890afbac</guid>
<link>https://academictorrents.com/details/b92bb1cca69f31daafd2c344e969d070890afbac</link>
<description>The iodide/triiodide redox shuttle has limited the efficiencies accessible in dye-sensitized solar cells. Here, we report mesoscopic solar cells that incorporate a Co(II/III)tris(bipyridyl)–based redox electrolyte in conjunction with a custom synthesized donor-π-bridge-acceptor zinc porphyrin dye as sensitizer (designated YD2-o-C8). The specific molecular design of YD2-o-C8 greatly retards the rate of interfacial back electron transfer from the conduction band of the nanocrystalline titanium dioxide film to the oxidized cobalt mediator, which enables attainment of strikingly high photovoltages approaching 1 volt. Because the YD2-o-C8 porphyrin harvests sunlight across the visible spectrum, large photocurrents are generated. Cosensitization of YD2-o-C8 with another organic dye further enhances the performance of the device, leading to a measured power conversion efficiency of 12.3% under simulated air mass 1.5 global sunlight.</description>
<size>485576</size>
</item><item>
<title>SMS Spam Collection Data Set </title>
<category>Dataset</category>
<infohash>25932ba42d983dd7b4474d8f59ab56cdc25d9107</infohash>
<guid>https://academictorrents.com/details/25932ba42d983dd7b4474d8f59ab56cdc25d9107</guid>
<link>https://academictorrents.com/details/25932ba42d983dd7b4474d8f59ab56cdc25d9107</link>
<description>==Data Set Information: This corpus has been collected from free or free-for-research sources on the Internet: -&gt; A collection of 425 SMS spam messages was manually extracted from the Grumbletext Web site. This is a UK forum in which cell phone users make public claims about SMS spam messages, most of them without reporting the actual spam message received. The identification of the text of spam messages in the claims is a very hard and time-consuming task, and it involved carefully scanning hundreds of web pages. The Grumbletext Web site is: [Web Link]. -&gt; A subset of 3,375 randomly chosen ham messages from the NUS SMS Corpus (NSC), a dataset of about 10,000 legitimate messages collected for research at the Department of Computer Science at the National University of Singapore. The messages largely originate from Singaporeans, mostly students attending the University. These messages were collected from volunteers who were made aware that their contributions were going to be made publicly available. The NUS SMS Corpus is available at: [Web Link]. -&gt; A list of 450 SMS ham messages collected from Caroline Tagg's PhD thesis, available at [Web Link]. -&gt; Finally, we have incorporated the SMS Spam Corpus v.0.1 Big. It has 1,002 SMS ham messages and 322 spam messages and is publicly available at: [Web Link]. This corpus has been used in the following academic studies:</description>
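For reference, the per-source counts quoted in the description can be totalled in a couple of lines (a sketch; the figures are exactly the ones stated above):

```python
# Source counts listed in the corpus description.
spam = 425 + 322          # Grumbletext claims + SMS Spam Corpus v.0.1 Big
ham = 3375 + 450 + 1002   # NUS SMS Corpus subset + PhD thesis list + v.0.1 Big
total = spam + ham        # full corpus size
```

This gives 747 spam and 4,827 ham messages, 5,574 SMS in total.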
<size>695379</size>
</item><item>
<title>Tiny Images Dataset</title>
<category>Dataset</category>
<infohash>03b779ffefa8efc30c2153f3330bb495bdc3e034</infohash>
<guid>https://academictorrents.com/details/03b779ffefa8efc30c2153f3330bb495bdc3e034</guid>
<link>https://academictorrents.com/details/03b779ffefa8efc30c2153f3330bb495bdc3e034</link>
<description>https://i.imgur.com/gWxLPJm.jpg ## Overview This page has links for downloading the Tiny Images dataset, which consists of 79,302,017 images, each being a 32x32 color image. This data is stored in the form of large binary files which can be accessed by a Matlab toolbox that we have written. You will need around 400Gb of free disk space to store all the files. In total there are 5 files that need to be downloaded, 3 of which are large binary files consisting of (i) the images themselves; (ii) their associated metadata (filename, search engine used, ranking etc.); (iii) Gist descriptors for each image. The other two files are the Matlab toolbox and index data file that together let you easily load in data from the binaries. ## Downloads Note that these files are very large and will take a considerable time to download. Please ensure you have sufficient disk space before commencing the download. 1. Image binary (227Gb) 2. Metadata binary (57Gb) 3. Gist binary (114Gb) 4. Index data (7Mb) 5. Matlab Tiny Images toolbox (150Kb) ## Overview The 79 million images are stored in one giant binary file, 227Gb in size. The metadata accompanying each image is also in a single giant file, 57Gb in size. To read images/metadata from these files, we have provided some Matlab wrapper functions. There are two versions of the functions for reading image data: * (i) loadTinyImages.m - plain Matlab function (no MEX), runs under 32/64 bits. Loads images in by image number. Use this by default. * (ii) read_tiny_big_binary.m - Matlab wrapper for 64-bit MEX function. A bit faster and more flexible than (i), but requires a 64-bit machine. There are two types of annotation data: * (i) Manual annotation data, stored in annotations.txt, which holds the label of images manually inspected to see if the image content agrees with the noun used to collect it. Some other information, such as search engine, is also stored. This data is available for only a very small portion of the images. 
* (ii) Automatic annotation data, stored in tiny_metadata.bin, consisting of information relating to the gathering of the image, e.g. search engine, which page, url to thumbnail etc. This data is available for all 79 million images. ## Requirements 1. Around 300Gb of disk space. 2. If you want to use the MEX versions of the code for reading in the data, you will need a 64-bit machine. But for most purposes, the Matlab implementation (loadTinyImages.m), which runs under either 32 or 64 bits, will work perfectly well. To discover if you have a 32/64-bit machine, type  uname -a  in an xterm (if using linux). ## Files The .tgz file should contain 12 files: 1. loadTinyImages.m - read tiny image data, pure Matlab version. 2. loadGroundTruth.m - read annotations.txt file holding manual annotations 3. read_tiny_big_binary.m - read tiny image data, 64-bit Matlab/MEX version 4. read_tiny_big_metadata.m - read tiny image metadata, 64-bit Matlab/MEX version 5. read_tiny_gist_binary.m - read tiny Gist, 64-bit Matlab/MEX version 6. read_tiny_binary_big_core.c - 64-bit MEX source code for image reading 7. read_tiny_metadata_big_core.c - 64-bit MEX source code for metadata reading 8. read_tiny_binary_gist_core.c - 64-bit MEX source code for gist reading 9. compute_hash_function.m - utility function to do fast string searching as used by read_tiny_big_binary.m and read_tiny_big_metadata.m 10. fast_str2num.m - utility function used by read_tiny_big_metadata.m 11. annotations.txt - text file holding list of annotated images 12. README.txt - this file Separately, you should have downloaded the following files: 1. tiny_images.bin - 227Gb file holding 79,302,017 images 2. tiny_metadata.bin - 57Gb file holding metadata for all 79,302,017 images 3. tinygist80million.bin - 114Gb file holding 384-dim Gist descriptors for all 79,302,017 images 4. 
tiny_index.mat - 7Mb file holding index info, including: word - cell array of all 75,846 nouns for which we have images in tiny_images.bin num_imgs - vector with #images per noun for all 75,846 nouns ## Preliminaries Before the functions can be used you must do two things: 1. Set the absolute paths to the binary files in the Matlab functions. There are a total of 7 lines that must be set: *(i) loadTinyImages.m, line 14 - set path to tiny_images.bin file *(ii) read_tiny_big_binary.m, line 40 - set path to tiny_images.bin file *(iii) read_tiny_big_binary.m, line 42 - set path to tiny_index.mat file *(iv) read_tiny_big_metadata.m, line 63 - set path to tiny_metadata.bin file *(v) read_tiny_big_metadata.m, line 65 - set path to tiny_index.mat file *(vi) read_tiny_gist_binary.m, line 36 - set path to tiny_index.mat file *(vii) read_tiny_gist_binary.m, line 38 - set path to tiny_metadata.bin file 2. If using the MEX versions, they must be compiled with the commands: *(i) mex read_tiny_binary_big_core.c *(ii) mex read_tiny_metadata_big_core.c *(iii) mex read_tiny_binary_gist_core.c ## Usage Here are some examples of the scripts in use. Please look at the comments at the top of each file for more extensive explanations. ## loadTinyImages.m</description>
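The plain-Matlab loader described above can be mirrored in Python. This is an unofficial sketch, assuming each image occupies 3,072 consecutive bytes (32x32x3 uint8) stored channel by channel in Matlab's column-major order; verify the layout against loadTinyImages.m before relying on it:

```python
import numpy as np

BYTES_PER_IMAGE = 32 * 32 * 3  # 3072 bytes per 32x32 RGB image (assumed)

def load_tiny_images(path, first, count):
    """Load `count` images starting at zero-based index `first` from
    tiny_images.bin. Assumes each image is a contiguous 3072-byte record,
    one color channel at a time, column-major within each channel."""
    with open(path, "rb") as f:
        f.seek(first * BYTES_PER_IMAGE)
        raw = np.frombuffer(f.read(count * BYTES_PER_IMAGE), dtype=np.uint8)
    # Axes after reshape: (image, channel, column, row);
    # transpose to the conventional (image, row, column, channel).
    return raw.reshape(count, 3, 32, 32).transpose(0, 3, 2, 1)
```

Seeking by record index means memory use is proportional to `count`, not to the 227Gb file, which mirrors how the Matlab loader reads images in by image number.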
<size>426334966808</size>
</item><item>
<title>Labeled Faces in the Wild aligned (LFW-a)</title>
<category>Dataset</category>
<infohash>403e6d6945a64dd1b9e185a6cd8d029274efccdc</infohash>
<guid>https://academictorrents.com/details/403e6d6945a64dd1b9e185a6cd8d029274efccdc</guid>
<link>https://academictorrents.com/details/403e6d6945a64dd1b9e185a6cd8d029274efccdc</link>
<description>The "Labeled Faces in the Wild-a" image collection is a database of labeled face images intended for studying Face Recognition in unconstrained images. It contains the same images available in the original Labeled Faces in the Wild data set; however, here we provide them after alignment using a commercial face alignment software. Some of our results, published in [1,2,3], were produced using these images. We show this alignment to improve the performance of face recognition algorithms. More information on how these images were aligned may be found in the two papers. We have maintained the same directory structure as in the original LFW data set, and so these images can be used as direct substitutes for those in the original image set. Note, however, that the images available here are grayscale versions of the originals. Citation: If you find these images useful and use them in your work, please follow these guidelines: comply with any instructions specified for the original LFW data set, and cite one (or all) of the papers [1,2,3] below. References: [1] Lior Wolf, Tal Hassner, and Yaniv Taigman, Effective Face Recognition by Combining Multiple Descriptors and Learned Background Statistics, IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 33(10), Oct. 2011 (PDF) [2] Lior Wolf, Tal Hassner and Yaniv Taigman, Similarity Scores based on Background Samples, Asian Conference on Computer Vision (ACCV), Xi'an, Sept 2009 (PDF) [3] Yaniv Taigman, Lior Wolf and Tal Hassner, Multiple One-Shots for Utilizing Class Label Information, The British Machine Vision Conference (BMVC), London, Sept 2009 (project, PDF)</description>
<size>96770694</size>
</item><item>
<title>Labeled Faces in the Wild - aligned with funneling</title>
<category>Dataset</category>
<infohash>073ecac13cf175ddc5617c1f2897b15ca7accd59</infohash>
<guid>https://academictorrents.com/details/073ecac13cf175ddc5617c1f2897b15ca7accd59</guid>
<link>https://academictorrents.com/details/073ecac13cf175ddc5617c1f2897b15ca7accd59</link>
<description>Data from the paper: "Unsupervised Joint Alignment of Complex Images" Gary B. Huang and Vidit Jain and Erik Learned-Miller ICCV 2007 Welcome to Labeled Faces in the Wild, a database of face photographs designed for studying the problem of unconstrained face recognition. The data set contains more than 13,000 images of faces collected from the web. Each face has been labeled with the name of the person pictured. 1680 of the people pictured have two or more distinct photos in the data set. The only constraint on these faces is that they were detected by the Viola-Jones face detector. More details can be found in the technical report below. Information: 13233 images 5749 people 1680 people with two or more images</description>
<size>243346528</size>
</item><item>
<title>Labeled Faces in the Wild - aligned with deep funneling</title>
<category>Dataset</category>
<infohash>692d556e6f2fcb430adeffc464eb8b0a6da58f65</infohash>
<guid>https://academictorrents.com/details/692d556e6f2fcb430adeffc464eb8b0a6da58f65</guid>
<link>https://academictorrents.com/details/692d556e6f2fcb430adeffc464eb8b0a6da58f65</link>
<description>Data from the paper: Learning to Align from Scratch Gary B. Huang and Marwan Mattar and Honglak Lee and Erik Learned-Miller NIPS 2012 Welcome to Labeled Faces in the Wild, a database of face photographs designed for studying the problem of unconstrained face recognition. The data set contains more than 13,000 images of faces collected from the web. Each face has been labeled with the name of the person pictured. 1680 of the people pictured have two or more distinct photos in the data set. The only constraint on these faces is that they were detected by the Viola-Jones face detector. More details can be found in the technical report below. Information: 13233 images 5749 people 1680 people with two or more images</description>
<size>108761145</size>
</item><item>
<title>Labeled Faces in the Wild</title>
<category>Dataset</category>
<infohash>9547ef95bc7007685afe52a8ec940aa61530bc99</infohash>
<guid>https://academictorrents.com/details/9547ef95bc7007685afe52a8ec940aa61530bc99</guid>
<link>https://academictorrents.com/details/9547ef95bc7007685afe52a8ec940aa61530bc99</link>
<description>Welcome to Labeled Faces in the Wild, a database of face photographs designed for studying the problem of unconstrained face recognition. The data set contains more than 13,000 images of faces collected from the web. Each face has been labeled with the name of the person pictured. 1680 of the people pictured have two or more distinct photos in the data set. The only constraint on these faces is that they were detected by the Viola-Jones face detector. More details can be found in the technical report below. Information: 13233 images 5749 people 1680 people with two or more images Citation: Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments Gary B. Huang and Manu Ramesh and Tamara Berg and Erik Learned-Miller University of Massachusetts, Amherst - 2007</description>
<size>180566744</size>
</item><item>
<title>The Street View House Numbers (SVHN) Dataset</title>
<category>Dataset</category>
<infohash>6f4caf3c24803d114c3cae3ab9cb946cd23c7213</infohash>
<guid>https://academictorrents.com/details/6f4caf3c24803d114c3cae3ab9cb946cd23c7213</guid>
<link>https://academictorrents.com/details/6f4caf3c24803d114c3cae3ab9cb946cd23c7213</link>
<description>SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirement on data preprocessing and formatting. It can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but incorporates an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real-world problem (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images. Overview 10 classes, 1 for each digit. Digit "1" has label 1, "9" has label 9, and "0" has label 10. 73257 digits for training, 26032 digits for testing, and 531131 additional, somewhat less difficult samples to use as extra training data. Comes in two formats: 1. Original images with character-level bounding boxes. 2. MNIST-like 32-by-32 images centered around a single character (many of the images do contain some distractors at the sides). These are the original, variable-resolution, color house-number images with character-level bounding boxes, as shown in the example images above. (The blue bounding boxes here are just for illustration purposes. The bounding box information is stored in digitStruct.mat instead of drawn directly on the images in the dataset.) Each tar.gz file contains the original images in png format, together with a digitStruct.mat file, which can be loaded using Matlab. The digitStruct.mat file contains a struct called digitStruct with the same length as the number of original images. Each element in digitStruct has the following fields: name, which is a string containing the filename of the corresponding image; bbox, which is a struct array that contains the position, size and label of each digit bounding box in the image. E.g.: digitStruct(300).bbox(2).height gives the height of the 2nd digit bounding box in the 300th image. 
Reference Please cite the following reference in papers using this dataset: Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, Andrew Y. Ng Reading Digits in Natural Images with Unsupervised Feature Learning NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011. Please use http://ufldl.stanford.edu/housenumbers as the URL for this site when necessary For questions regarding the dataset, please contact streetviewhousenumbers@gmail.com</description>
<size>2636187279</size>
</item><item>
<title>THE NORB DATASET, V1.0</title>
<category>Dataset</category>
<infohash>4bcab1393bc699a39cb5c409e1f957d6c2918732</infohash>
<guid>https://academictorrents.com/details/4bcab1393bc699a39cb5c409e1f957d6c2918732</guid>
<link>https://academictorrents.com/details/4bcab1393bc699a39cb5c409e1f957d6c2918732</link>
<description>This database is intended for experiments in 3D object recognition from shape. It contains images of 50 toys belonging to 5 generic categories: four-legged animals, human figures, airplanes, trucks, and cars. The objects were imaged by two cameras under 6 lighting conditions, 9 elevations (30 to 70 degrees every 5 degrees), and 18 azimuths (0 to 340 every 20 degrees). The training set is composed of 5 instances of each category (instances 4, 6, 7, 8 and 9), and the test set of the remaining 5 instances (instances 0, 1, 2, 3, and 5). CONTENT The files are gzipped for download purposes. Once uncompressed, they are in a simple binary matrix format, with file postfix ".mat". The file format is explained in a later section. The "-dat" files store the image sequences. The "-cat" files store the corresponding category of the images. Each "-dat" file stores 29,160 image pairs (6 categories, 5 instances, 6 lightings, 9 elevations, and 18 azimuths). The 6th category is for images without objects, which can be used to train a system to reject images as none of the 5 object categories. Each corresponding "-cat" file contains 29,160 category labels (0 for animal, 1 for human, 2 for plane, 3 for truck, 4 for car, 5 for blank). Each "-info" file stores 29,160 10-dimensional vectors, which contain additional information about the corresponding images. The first 4 elements in the vector are: - 1. the instance in the category (0 to 9) - 2. the elevation (0 to 8, which means the cameras are 30, 35, 40, 45, 50, 55, 60, 65, 70 degrees from the horizontal, respectively) - 3. the azimuth (0, 2, 4, ..., 34; multiply by 10 to get the azimuth in degrees) - 4. the lighting condition (0 to 5) and the next 6 elements describe the perturbations added to the object when superposed onto a cluttered background (see next section). For regular training and testing, "-dat" and "-cat" files are sufficient. "-info" files are provided in case some other forms of classification or preprocessing are needed. 
JITTERED OBJECTS AND CLUTTERED BACKGROUND After capturing, each image has been processed so that the object is centered in the image (the center of mass of the object pixels is in the center of the image), scaled so that the bounding box is roughly 80x80 pixels, and placed on a uniform background, including the cast shadow. Then 3 sources of variation are added to the data set: - the objects are perturbed - the objects are superposed onto complex backgrounds - distractor objects are added to the background The objects are randomly perturbed in 5 ways. They are scaled by factors between 0.78 and 1.0; in-plane rotated -5 to +5 degrees; and shifted -6 to +6 pixels horizontally and vertically. The image intensities (in the range of 0 to 255) are shifted by a random value between -20 and +20; image contrasts are scaled in the range of 0.8 to 1.3. The perturbations are stored in the last 6 elements of the "-info" files: - 5. horizontal shift (-6 to +6) - 6. vertical shift (-6 to +6) - 7. luminance change (-20 to +20) - 8. contrast (0.8 to 1.3) - 9. object scale (0.78 to 1.0) - 10. rotation (-5 to +5 degrees) The complex background images are extracted from a subset of natural scene images from the Corel image library. The images contain scenes with large region contrasts, such as a lake against a mountain, and irregular region boundaries. One distractor object is added to each image. The distractor is located toward the boundary of the image, but can clutter the main object in the center. There are images with only background and distractor objects. These images belong to their own category, as indicated in the category files. FILE FORMAT The files are stored in the so-called "binary matrix" file format, which is a simple format for vectors and multidimensional matrices of various element types. Binary matrix files begin with a file header which describes the type and size of the matrix, followed by the binary image of the matrix. 
The header is best described by a C structure: struct header { int magic; /* 4 bytes */ int ndim; /* 4 bytes, little endian */ int dim[3]; }; Note that when the matrix has fewer than 3 dimensions, say a 1D vector, then dim[1] and dim[2] are both 1. When the matrix has more than 3 dimensions, the header is followed by further dimension size information. Otherwise, the matrix data comes immediately after the file header, stored with the index in the last dimension changing fastest. The magic number encodes the element type of the matrix: - 0x1E3D4C51 for a single precision matrix - 0x1E3D4C52 for a packed matrix - 0x1E3D4C53 for a double precision matrix - 0x1E3D4C54 for an integer matrix - 0x1E3D4C55 for a byte matrix - 0x1E3D4C56 for a short matrix Since the files are generated on an Intel machine, they use the little-endian scheme to encode the 4-byte integers. Take care when reading the files on machines that use big-endian byte order. - The "-dat" files store a 4D tensor of dimensions 29160x2x108x108. - The "-cat" files store a 1D vector of dimension 29,160. - The "-info" files store a 2D matrix of dimensions 29160x10. The original distribution includes a piece of Matlab code showing how to read example files (to avoid endian confusion, it reads the header byte by byte).</description>
<size>6186999662</size>
</item><item>
<title>PASCAL-S - The Secrets of Salient Object Segmentation Dataset</title>
<category>Dataset</category>
<infohash>6c49defd6f0e417c039637475cde638d1363037e</infohash>
<guid>https://academictorrents.com/details/6c49defd6f0e417c039637475cde638d1363037e</guid>
<link>https://academictorrents.com/details/6c49defd6f0e417c039637475cde638d1363037e</link>
<description>Free-viewing fixations on a subset of 850 images from PASCAL VOC, collected from 8 subjects with 3 s viewing time using an Eyelink II eye tracker. The performance of most algorithms suggests that PASCAL-S is less biased than most saliency datasets. 850 IMAGES FROM PASCAL 2010 1296 OBJECT INSTANCES 12 SUBJECTS     Folders in archive: algmaps/ algmaps/pascal algmaps/pascal/mcg_gbvs algmaps/pascal/humanFix algmaps/pascal/gc algmaps/pascal/dva algmaps/pascal/ft algmaps/pascal/sig algmaps/pascal/aim algmaps/pascal/pcas algmaps/pascal/gbvs algmaps/pascal/sun algmaps/pascal/aws algmaps/pascal/sf algmaps/pascal/itti algmaps/bruce algmaps/bruce/dva algmaps/bruce/sig algmaps/bruce/aim algmaps/bruce/gbvs algmaps/bruce/sun algmaps/bruce/aws algmaps/bruce/itti algmaps/cerf algmaps/cerf/dva algmaps/cerf/sig algmaps/cerf/aim algmaps/cerf/gbvs algmaps/cerf/sun algmaps/cerf/aws algmaps/cerf/itti algmaps/imgsal algmaps/imgsal/humanFix algmaps/imgsal/gc algmaps/imgsal/cpmc_gbvs algmaps/imgsal/dva algmaps/imgsal/ft algmaps/imgsal/sig algmaps/imgsal/aim algmaps/imgsal/pcas algmaps/imgsal/gbvs algmaps/imgsal/sun algmaps/imgsal/aws algmaps/imgsal/sf algmaps/imgsal/itti algmaps/ft algmaps/ft/gc algmaps/ft/cpmc_gbvs algmaps/ft/dva algmaps/ft/ft algmaps/ft/sig algmaps/ft/aim algmaps/ft/pcas algmaps/ft/gbvs algmaps/ft/sun algmaps/ft/aws algmaps/ft/sf algmaps/ft/itti algmaps/judd algmaps/judd/dva algmaps/judd/sig algmaps/judd/aim algmaps/judd/gbvs algmaps/judd/sun algmaps/judd/aws algmaps/judd/itti</description>
<size>1144215855</size>
</item><item>
<title>PASCAL-Context Dataset</title>
<category>Dataset</category>
<infohash>eec6177ad62f4c47086e4cbec93ac4c08857ddbe</infohash>
<guid>https://academictorrents.com/details/eec6177ad62f4c47086e4cbec93ac4c08857ddbe</guid>
<link>https://academictorrents.com/details/eec6177ad62f4c47086e4cbec93ac4c08857ddbe</link>
<description>This dataset is a set of additional annotations for PASCAL VOC 2010. It goes beyond the original PASCAL semantic segmentation task by providing annotations for the whole scene. The statistics section has a full list of 400+ labels. Every pixel has a unique class label. Instance information (i.e., different masks to separate different instances of the same class in the same image) is currently provided for the 20 PASCAL objects. Statistics Since the dataset is an annotation of PASCAL VOC 2010, it has the same statistics as the original dataset. The training and validation sets contain 10,103 images, while the test set contains 9,637 images. Usage Considerations The classes are not drawn from a fixed pool. Instead, labelers were free to either select or type in what they believed to be the appropriate class, and to determine the appropriate object granularity. We decided to merge/split some of the categories, so the current number of categories differs from what we mentioned in the CVPR 2014 paper. When using this dataset, it is important to examine the classes to ensure they match your intended use. For example, sand is often labeled independently despite also being considered ground. Those interested in ground may want to cluster sand and ground together along with other classes. Citation The Role of Context for Object Detection and Semantic Segmentation in the Wild Roozbeh Mottaghi, Xianjie Chen, Xiaobai Liu, Nam-Gyu Cho, Seong-Whan Lee, Sanja Fidler, Raquel Urtasun, Alan Yuille IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014 Acknowledgements We would like to acknowledge the support of the Implementation of Technologies for Identification, Behavior, and Location of Human based on Sensor Network Fusion Program through the Korean Ministry of Trade, Industry and Energy (Grant Number: 10041629). We would also like to thank the National Science Foundation for grant 1317376 (Visual Cortex on Silicon, NSF Expedition in Computing). We thank Viet Nguyen for coordinating and leading the efforts to clean up the annotations.</description>
<size>82931861</size>
</item><item>
<title>PASCAL-Part Dataset</title>
<category>Dataset</category>
<infohash>f86670296bff85bcdffea6c4fc2e791446f9fb5e</infohash>
<guid>https://academictorrents.com/details/f86670296bff85bcdffea6c4fc2e791446f9fb5e</guid>
<link>https://academictorrents.com/details/f86670296bff85bcdffea6c4fc2e791446f9fb5e</link>
<description>This dataset is a set of additional annotations for PASCAL VOC 2010. It goes beyond the original PASCAL object detection task by providing segmentation masks for each body part of the object. For categories that do not have a consistent set of parts (e.g., boat), we provide the silhouette annotation. Statistics Since the dataset is an annotation of PASCAL VOC 2010, it has the same statistics as the original dataset. The training and validation sets contain 10,103 images, while the test set contains 9,637 images. Usage Considerations We provide segmentation masks for detailed body parts. One can merge several parts to get appropriate object-part granularity for different tasks. For instance, "eyes", "ears", "nose", etc. can be merged into a single "head" part. Citation Detect What You Can: Detecting and Representing Objects using Holistic Models and Body Parts Xianjie Chen, Roozbeh Mottaghi, Xiaobai Liu, Sanja Fidler, Raquel Urtasun, Alan Yuille IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014 Acknowledgements We thank Viet Nguyen for coordinating and leading the efforts to clean up the annotations. We would like to acknowledge the support of grants ARO 62250-CS and N00014-12-1-0883.</description>
<size>80872607</size>
</item><item>
<title>Columbia University Image Library (COIL-20)</title>
<category>Dataset</category>
<infohash>1d16994c70b7fff8bfe917f83c397b1193daee7f</infohash>
<guid>https://academictorrents.com/details/1d16994c70b7fff8bfe917f83c397b1193daee7f</guid>
<link>https://academictorrents.com/details/1d16994c70b7fff8bfe917f83c397b1193daee7f</link>
<description>The database is available in two versions. The first, [unprocessed], consists of images for five of the objects that contain both the object and the background. The second, [processed], contains images for all of the objects in which the background has been discarded (and the images consist of the smallest square that contains the object). For formal documentation, see the corresponding compressed technical report "Columbia Object Image Library (COIL-20)," S. A. Nene, S. K. Nayar and H. Murase, Technical Report CUCS-005-96, February 1996.</description>
<size>19894476</size>
</item><item>
<title>Stanford STL-10 Image Dataset</title>
<category>Dataset</category>
<infohash>a799a2845ac29a66c07cf74e2a2838b6c5698a6a</infohash>
<guid>https://academictorrents.com/details/a799a2845ac29a66c07cf74e2a2838b6c5698a6a</guid>
<link>https://academictorrents.com/details/a799a2845ac29a66c07cf74e2a2838b6c5698a6a</link>
<description>![](https://cs.stanford.edu/~acoates/stl10/images.png) The STL-10 dataset is an image recognition dataset for developing unsupervised feature learning, deep learning, and self-taught learning algorithms. It is inspired by the CIFAR-10 dataset but with some modifications. In particular, each class has fewer labeled training examples than in CIFAR-10, but a very large set of unlabeled examples is provided to learn image models prior to supervised training. The primary challenge is to make use of the unlabeled data (which comes from a similar but different distribution than the labeled data) to build a useful prior. We also expect that the higher resolution of this dataset (96x96) will make it a challenging benchmark for developing more scalable unsupervised learning methods. Overview 10 classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck. Images are 96x96 pixels, color. 500 training images (10 pre-defined folds) and 800 test images per class. 100,000 unlabeled images for unsupervised learning. These examples are extracted from a similar but broader distribution of images. For instance, it contains other types of animals (bears, rabbits, etc.) and vehicles (trains, buses, etc.) in addition to the ones in the labeled set. Images were acquired from labeled examples on ImageNet. Testing Protocol We recommend the following standardized testing protocol for reporting results: Perform unsupervised training on the unlabeled data. Perform supervised training on the labeled data using 10 (pre-defined) folds of 100 examples from the training data. The indices of the examples to be used for each fold are provided. Report average accuracy on the full test set. Download Binary files (Python code from Martin Tutek) The binary files are split into data and label files with suffixes: train_X.bin, train_y.bin, test_X.bin and test_y.bin. Within each, the values are stored as tightly packed arrays of uint8s. 
The images are stored in column-major order, one channel at a time. That is, the first 96*96 values are the red channel, the next 96*96 are green, and the last are blue. The labels are in the range 1 to 10. The unlabeled dataset, unlabeled.bin, is in the same format, but there is no "_y.bin" file. A class_names.txt file is included for reference, with one class name per line. The file fold_indices.txt contains the (zero-based) indices of the examples to be used for each training fold. The first line contains the indices for the first fold, the second line the second fold, and so on. Thanks to Martin Tutek for the Python code to load/view STL-10! Reference Please cite the following reference in papers using this dataset: Adam Coates, Honglak Lee, Andrew Y. Ng, An Analysis of Single-Layer Networks in Unsupervised Feature Learning, AISTATS, 2011.</description>
<size>2640397119</size>
</item><item>
<title>Boston Hubway Data Visualization Challenge Dataset</title>
<category>Dataset</category>
<infohash>3e395a74e333156daddcd67d614415fc9e237340</infohash>
<guid>https://academictorrents.com/details/3e395a74e333156daddcd67d614415fc9e237340</guid>
<link>https://academictorrents.com/details/3e395a74e333156daddcd67d614415fc9e237340</link>
<description>The Hubway trip history data includes every trip taken through Nov 2013, with date, time, origin and destination stations, plus the bike number and more. Data from 2011/07 through 2013/11. The Hubway trip history data Every time a Hubway user checks a bike out from a station, the system records basic information about the trip. Those anonymous data points have been exported into the spreadsheet. Please note, all private data, including member names, has been removed from these files. What can the data tell us? The CSV file contains data for every Hubway trip from the system launch on July 28th, 2011, through the end of September 2012. The file contains the data points listed below for each trip. We've also posed some of the questions you could answer with this dataset; we're sure you'll have lots more of your own. Duration - Duration of trip. What's the average trip duration for annual members vs. casual users? Start date - Includes start date and time. What are the peak Hubway hours? End date - Includes end date and time. Which days of the week get the most Hubway traffic? Start station - Includes starting station name and number. Which stations are most popular? Which stations make up the most popular origin/destination pairs? End station - Includes ending station name and number. Which stations are the most asymmetric, with more trips starting there than ending there, or vice versa? Are they all at the top of hills? Bike Nr - Includes the ID number of the bike used for the trip. What does a year in the life of one Hubway bike look like? Member Type - Lists whether the user was an Annual or Casual (1 or 3 day) member. Which stations get the most tourist traffic, and which get the most commuters? Zip code - Lists the zip code for annual members only. How far does Hubway really reach? Which community should be the next to get Hubway stations? Birthdate - Lists the year in which annual members were born. Are all of the Hubway rentals at 2:00am by people under 25? Gender - Lists gender for annual members only. Are there different top stations for male vs. female Hubway members?</description>
<size>25999914</size>
</item><item>
<title>2013 MassDOT Visualizing Transportation Hackathon Dataset</title>
<category>Dataset</category>
<infohash>1938b67c7db77f878a56256e9958bb20801b9ddd</infohash>
<guid>https://academictorrents.com/details/1938b67c7db77f878a56256e9958bb20801b9ddd</guid>
<link>https://academictorrents.com/details/1938b67c7db77f878a56256e9958bb20801b9ddd</link>
<description>MassDOT Visualizing Transportation Hackathon, December 2013. Informing the Future of Massachusetts Transportation through Data Analysis and Visualization. Introduction At the MassDOT Visualizing Transportation Hackathon, the Massachusetts Department of Transportation (MassDOT), in partnership with the Mass Big Data Initiative, will release a series of related data sets on travel in Massachusetts and will open a challenge to the public to collaborate around analyzing this data and visualizing resulting insights to help inform the future of transportation in the Commonwealth. We invite participants to explore a collection of transportation data with a specific focus on travel behavior, road-rail comparisons, and the energy, environmental, and social impacts of transportation mode choice. Background Each day in Massachusetts, travelers throughout the state make individual decisions on how to reach their destinations. Together, the public's transportation "mode choice" translates into significant outcomes which impact residents across the Commonwealth in many ways, including through traffic congestion, travel costs, and carbon emissions. To better understand traffic flows throughout the state, MassDOT installed a pilot network of roadway sensors to provide real-time traffic management (RTTM) information on three major roadways: 93, I-90, and Rt. 3. Each of these roadway "corridors" is paralleled by at least one light rail line. Participants are encouraged to show compelling insights from the available data, with a special priority on presenting the differences between driving vs. riding the train through specific corridors. The event is designed to encourage participants to choose their own most compelling "lens" through which to analyze the data, which may include travel cost, emissions, time delays, etc. Data The event will draw upon several "core" datasets covering travel behavior. This will provide a foundation from which comparisons can be made within the data and across additional, related regional datasets. Core datasets: Real-time Traffic Management Data Real-time Traffic Management (RTTM) data is collected by MassDOT via a pilot network of sensors that monitor traffic speed in three major roadway corridors (the "three corridors") in the Boston Metro area: I-90, 93, and Rt. 3. Sensors at regular intervals at the road level recognize and report the signals of bluetooth-enabled mobile devices in cars as they travel along roadways, calculating the vehicle travel speed associated with travel between specific road segments. No personally identifiable information is collected. Already successfully tested, this initial pilot network is in the process of expanding statewide throughout 2014. The roadway speed data is then processed and made public in two ways: a map display of road speeds available online at http://www.massdot.state.ma.us/highway/TrafficTravelResources/TrafficInformationMaps.aspx and a real-time XML feed that provides the same processed speed data (not the raw capture data) that feeds the map, listed on the MassDOT developers page at http://www.massdot.state.ma.us/DevelopersData.aspx Roadway Volume Data Roadway Volume data provides information on the number of vehicles travelling along the corridors. Highway Planned &amp; Unplanned Event Data Highway Planned &amp; Unplanned Event data covers scheduled and emergency roadwork and traffic accidents along major Massachusetts roadways. Commuter Rail Corridor Data Commuter Rail Corridor data includes arrival and departure times of trains, ridership "load counts", and fare data.</description>
<size>249289711</size>
</item><item>
<title>Epinions SNAP Social Network Data</title>
<category>Dataset</category>
<infohash>ba43f388cb372f72a91d7c08c54a3f8b36fe3505</infohash>
<guid>https://academictorrents.com/details/ba43f388cb372f72a91d7c08c54a3f8b36fe3505</guid>
<link>https://academictorrents.com/details/ba43f388cb372f72a91d7c08c54a3f8b36fe3505</link>
<description>This is a who-trusts-whom online social network of Epinions.com, a general consumer review site. Members of the site can decide whether to "trust" each other. All the trust relationships interact and form the Web of Trust, which is then combined with review ratings to determine which reviews are shown to the user. Dataset statistics Nodes	75879 Edges	508837 Nodes in largest WCC	75877 (1.000) Edges in largest WCC	508836 (1.000) Nodes in largest SCC	32223 (0.425) Edges in largest SCC	443506 (0.872) Average clustering coefficient	0.1378 Number of triangles	1624481 Fraction of closed triangles	0.0229 Diameter (longest shortest path)	14 90-percentile effective diameter	5</description>
<size>1630437</size>
</item><item>
<title>Live Journal SNAP Network Data</title>
<category>Dataset</category>
<infohash>227d085132908313beb19e9d334bfbdce042a8f6</infohash>
<guid>https://academictorrents.com/details/227d085132908313beb19e9d334bfbdce042a8f6</guid>
<link>https://academictorrents.com/details/227d085132908313beb19e9d334bfbdce042a8f6</link>
<description>LiveJournal is a free on-line community with almost 10 million members; a significant fraction of these members are highly active. (For example, roughly 300,000 update their content in any given 24-hour period.) LiveJournal allows members to maintain journals and individual and group blogs, and it allows people to declare which other members are their friends. Dataset statistics Nodes	4847571 Edges	68993773 Nodes in largest WCC	4843953 (0.999) Edges in largest WCC	68983820 (1.000) Nodes in largest SCC	3828682 (0.790) Edges in largest SCC	65825429 (0.954) Average clustering coefficient	0.2742 Number of triangles	285730264 Fraction of closed triangles	0.04266 Diameter (longest shortest path)	16 90-percentile effective diameter	6.5</description>
<size>259619239</size>
</item><item>
<title>Twitter SNAP Network Data</title>
<category>Dataset</category>
<infohash>276e1028b08decbf711f275a57901dbde88ca5ab</infohash>
<guid>https://academictorrents.com/details/276e1028b08decbf711f275a57901dbde88ca5ab</guid>
<link>https://academictorrents.com/details/276e1028b08decbf711f275a57901dbde88ca5ab</link>
<description>This dataset consists of "circles" (or "lists") from Twitter. Twitter data was crawled from public sources. The dataset includes node features (profiles), circles, and ego networks. Data is also available from Facebook and Google+. ## Dataset statistics |Attribute|Value| |---|---| |Nodes|81306| |Edges|1768149| |Nodes in largest WCC|81306 (1.000)| |Edges in largest WCC|1768149 (1.000)| |Nodes in largest SCC|68413 (0.841)| |Edges in largest SCC|1685163 (0.953)| |Average clustering coefficient|0.5653| |Number of triangles|13082506| |Fraction of closed triangles|0.06415| |Diameter (longest shortest path)|7| |90-percentile effective diameter|4.5|</description>
<size>32962356</size>
</item><item>
<title>Google Plus SNAP Network Data</title>
<category>Dataset</category>
<infohash>cd595c024206ee0e10ffd607f4a3a19d37eaf83c</infohash>
<guid>https://academictorrents.com/details/cd595c024206ee0e10ffd607f4a3a19d37eaf83c</guid>
<link>https://academictorrents.com/details/cd595c024206ee0e10ffd607f4a3a19d37eaf83c</link>
<description>This dataset consists of "circles" from Google+. Google+ data was collected from users who had manually shared their circles using the "share circle" feature. The dataset includes node features (profiles), circles, and ego networks. Data is also available from Facebook and Twitter. Dataset statistics Nodes	107614 Edges	13673453 Nodes in largest WCC	107614 (1.000) Edges in largest WCC	13673453 (1.000) Nodes in largest SCC	69501 (0.646) Edges in largest SCC	9168660 (0.671) Average clustering coefficient	0.4901 Number of triangles	1073677742 Fraction of closed triangles	0.6552 Diameter (longest shortest path)	6 90-percentile effective diameter	3 Source (citation) J. McAuley and J. Leskovec. Learning to Discover Social Circles in Ego Networks. NIPS, 2012.</description>
<size>811541565</size>
</item><item>
<title>Facebook SNAP Network Data</title>
<category>Dataset</category>
<infohash>3efc53f35d49669b89039f2b4ec9de11ec1d73fd</infohash>
<guid>https://academictorrents.com/details/3efc53f35d49669b89039f2b4ec9de11ec1d73fd</guid>
<link>https://academictorrents.com/details/3efc53f35d49669b89039f2b4ec9de11ec1d73fd</link>
<description>This dataset consists of "circles" (or "friends lists") from Facebook. Facebook data was collected from survey participants using a Facebook app. The dataset includes node features (profiles), circles, and ego networks.</description>
<size>951514</size>
</item><item>
<title>Ford Campus Vision and Laser dataset</title>
<category>Dataset</category>
<infohash>9aeefe49b754722eb5c051e77bacc5d75eca3ef2</infohash>
<guid>https://academictorrents.com/details/9aeefe49b754722eb5c051e77bacc5d75eca3ef2</guid>
<link>https://academictorrents.com/details/9aeefe49b754722eb5c051e77bacc5d75eca3ef2</link>
<description>We provide a dataset collected by an autonomous ground vehicle testbed based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS LV) and a consumer (Xsens MTi-G) Inertial Measurement Unit (IMU), a Velodyne 3D lidar scanner, two push-broom forward-looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system. Here we present the time-registered data from these sensors mounted on the vehicle, collected while driving the vehicle around the Ford Research campus and downtown Dearborn, Michigan, during November-December 2009. The vehicle path trajectory in these datasets contains several large- and small-scale loop closures, which should be useful for testing various state-of-the-art computer vision and SLAM (Simultaneous Localization and Mapping) algorithms. The dataset is large (~100 GB), so make sure that you have sufficient bandwidth before downloading it. Once downloaded, you can unzip the dataset.tgz file to get the following files and folders under the main directory: Folders: SCANS, IMAGES, LCM, VELODYNE. Files: Timestamp.log, Pose-Applanix.log, Pose-Mtig.log, Gps.log, PARAM.mat Unzip the Code.tgz file. It should have two folders, "C" and "MATLAB", containing the utility functions. The details about these files and folders can be found in the README file and in our IJRR data paper. Please cite the IJRR data paper when using this data set in your work: Gaurav Pandey, James R. McBride and Ryan M. Eustice, Ford campus vision and lidar data set. International Journal of Robotics Research, 30(13):1543-1552, 2011. Calibrated laser reflectivity values: This file contains the [64x256] matrix of calibrated laser reflectivity values. Each element c(j,a) of this matrix is the calibrated output when beam j observes reflectivity a. 
This calibrated reflectivity map for each laser of the Velodyne laser scanner has been estimated using the method given in the paper "Unsupervised Calibration for Multi-beam Lasers" by Levinson et al. The uncalibrated/observed reflectivity values are available in the LCM log files and in the pcap files of the dataset. Subset of Dataset 1 (6 GB): This contains a subset of dataset 1, with 200 scans (scan indices 1000 to 1200) and the corresponding camera images. Dataset 1 (78 GB): This corresponds to a loop in downtown Dearborn, Michigan. Dataset 2 (119 GB): This corresponds to a loop inside the Ford campus in Dearborn, Michigan. Files: Code.zip dataset-1-subset.tgz dataset-1.tar.gz dataset-2.tar.gz ijrr2011.pdf</description>
<size>216009646228</size>
</item><item>
<title>Facebook Names Dataset</title>
<category>Dataset</category>
<infohash>e54c73099d291605e7579b90838c2cd86a8e9575</infohash>
<guid>https://academictorrents.com/details/e54c73099d291605e7579b90838c2cd86a8e9575</guid>
<link>https://academictorrents.com/details/e54c73099d291605e7579b90838c2cd86a8e9575</link>
<description>171 million names (100 million unique). This torrent contains: The URL of every searchable Facebook user's profile The name of every searchable Facebook user, both unique and by count (perfect for post-processing, datamining, etc.) Processed lists, including first names with count, last names with count, potential usernames with count, etc. The programs I used to generate everything So, there you have it: lots of awesome data from Facebook. Now, I just have to find one more problem with Facebook so I can write "Revenge of the Facebook Snatchers" and complete the trilogy. Any suggestions? &gt;:-) Limitations So far, I have only indexed the searchable users, not their friends. Getting their friends will be significantly more data to process, and I don't have those capabilities right now. I'd like to tackle that in the future, though, so if anybody has any bandwidth they'd like to donate, all I need is an ssh account and Nmap installed. An additional limitation is that these are only users whose first characters are from the Latin charset. I plan to add non-Latin names in future releases.</description>
<size>2991052604</size>
</item><item>
<title>2011 Harvard Computer Science E-76 Building Mobile Applications</title>
<category>Course</category>
<infohash>278219bcd867f1b0a65391308a866ce2e9732e89</infohash>
<guid>https://academictorrents.com/details/278219bcd867f1b0a65391308a866ce2e9732e89</guid>
<link>https://academictorrents.com/details/278219bcd867f1b0a65391308a866ce2e9732e89</link>
<description>Today's applications are increasingly mobile. Computers are no longer confined to desks and laps but instead live in our pockets and hands. This course teaches students how to build mobile apps for Android and iOS, two of today's most popular platforms, and how to deploy them in Android Market and the App Store. Students learn how to write native apps for Android using Eclipse and the Android SDK, how to write native apps for iPhones, iPod touches, and iPads using Xcode and the iOS SDK, and how to write web apps for both platforms.</description>
<size>11076957456</size>
</item><item>
<title>2011 Harvard CS50 Introduction to Computer Science I</title>
<category>Course</category>
<infohash>ee6f0ba62a717e7785abb5a98e45e2a08699f6c6</infohash>
<guid>https://academictorrents.com/details/ee6f0ba62a717e7785abb5a98e45e2a08699f6c6</guid>
<link>https://academictorrents.com/details/ee6f0ba62a717e7785abb5a98e45e2a08699f6c6</link>
<description>Introduction to the intellectual enterprises of computer science and the art of programming. This course teaches students how to think algorithmically and solve problems efficiently. Topics include abstraction, algorithms, encapsulation, data structures, databases, memory management, security, software development, virtualization, and websites. Languages include C, PHP, and JavaScript plus SQL, CSS, and HTML. Problem sets inspired by real-world domains of biology, cryptography, finance, forensics, and gaming. Designed for concentrators and non-concentrators alike, with or without prior programming experience.</description>
<size>20910489048</size>
</item><item>
<title>Georgia Tech face database</title>
<category>Dataset</category>
<infohash>0848b2c9b40e49041eff85ac4a2da71ae13a3e4f</infohash>
<guid>https://academictorrents.com/details/0848b2c9b40e49041eff85ac4a2da71ae13a3e4f</guid>
<link>https://academictorrents.com/details/0848b2c9b40e49041eff85ac4a2da71ae13a3e4f</link>
<description>The Georgia Tech face database (128 MB) contains images of 50 people taken in two or three sessions between 06/01/99 and 11/15/99 at the Center for Signal and Image Processing at the Georgia Institute of Technology. All people in the database are represented by 15 color JPEG images with cluttered backgrounds, taken at a resolution of 640x480 pixels. The average size of the faces in these images is 150x150 pixels. The pictures show frontal and/or tilted faces with different facial expressions, lighting conditions, and scales. Each image is manually labeled to determine the position of the face in the image. The set of label files is included. The Readme.txt file gives more details about the database.</description>
<size>133192489</size>
</item><item>
<title>rijksmuseum_data</title>
<category>Dataset</category>
<infohash>db3cd9defd6d3f16f0a0e6cd0ada882792b9f782</infohash>
<guid>https://academictorrents.com/details/db3cd9defd6d3f16f0a0e6cd0ada882792b9f782</guid>
<link>https://academictorrents.com/details/db3cd9defd6d3f16f0a0e6cd0ada882792b9f782</link>
<description>Images from the Rijksmuseum</description>
<size>160609890576</size>
</item><item>
<title>bm_eval.tar.gz</title>
<category>Dataset</category>
<infohash>00422309df7aea69c7475141fa5ec0dcf5862bc3</infohash>
<guid>https://academictorrents.com/details/00422309df7aea69c7475141fa5ec0dcf5862bc3</guid>
<link>https://academictorrents.com/details/00422309df7aea69c7475141fa5ec0dcf5862bc3</link>
<description>The British Museum's Linked Data and SPARQL service provides access to the same collection records available through the Museum's web-presented Collection Online, but in a computer-readable format. The use of the W3C open data standard, RDF, allows the Museum's collection data to join and relate to a growing body of linked data published by other organisations around the world interested in promoting accessibility and collaboration. The data has also been organised using the CIDOC CRM (Conceptual Reference Model), which is crucial for harmonising with other cultural heritage data. The CIDOC CRM represents the British Museum's data completely and, unlike other standards that fit data into a common set of data fields, all of the meaning contained in the Museum's source data is retained. This is the only way in which museums can represent all their specialist and localised knowledge, essential for serious research, while also supporting highly precise cross-organisational data discovery and semantic relationships. This n-triple dump was published on 01-Aug-2014 at http://collection.britishmuseum.org/dumps</description>
<size>2181122921</size>
</item><item>
<title>ImageNet LSVRC 2012 Validation Set (Object Detection)</title>
<category>Dataset</category>
<infohash>5d6d0df7ed81efd49ca99ea4737e0ae5e3a5f2e5</infohash>
<guid>https://academictorrents.com/details/5d6d0df7ed81efd49ca99ea4737e0ae5e3a5f2e5</guid>
<link>https://academictorrents.com/details/5d6d0df7ed81efd49ca99ea4737e0ae5e3a5f2e5</link>
<description>See http://image-net.org/challenges/LSVRC/2012</description>
<size>6744924160</size>
</item><item>
<title>ImageNet LSVRC 2012 Training Set (Bounding Boxes)</title>
<category>Dataset</category>
<infohash>28202f4f8dde5c9b26d406f5522f8763713e605b</infohash>
<guid>https://academictorrents.com/details/28202f4f8dde5c9b26d406f5522f8763713e605b</guid>
<link>https://academictorrents.com/details/28202f4f8dde5c9b26d406f5522f8763713e605b</link>
<description>See http://image-net.org/challenges/LSVRC/2012</description>
<size>20861852</size>
</item><item>
<title>ImageNet LSVRC 2012 Validation Set (Bounding Boxes)</title>
<category>Dataset</category>
<infohash>dfa9ab2528ce76b907047aa8cf8fc792852facb9</infohash>
<guid>https://academictorrents.com/details/dfa9ab2528ce76b907047aa8cf8fc792852facb9</guid>
<link>https://academictorrents.com/details/dfa9ab2528ce76b907047aa8cf8fc792852facb9</link>
<description>See http://image-net.org/challenges/LSVRC/2012</description>
<size>2221290</size>
</item><item>
<title>ImageNet LSVRC 2012 Training Set (Object Detection)</title>
<category>Dataset</category>
<infohash>a306397ccf9c2ead27155983c254227c0fd938e2</infohash>
<guid>https://academictorrents.com/details/a306397ccf9c2ead27155983c254227c0fd938e2</guid>
<link>https://academictorrents.com/details/a306397ccf9c2ead27155983c254227c0fd938e2</link>
<description>See http://image-net.org/challenges/LSVRC/2012</description>
<size>147897477120</size>
</item><item>
<title>Imagenet Full (Fall 2011 release)</title>
<category>Dataset</category>
<infohash>564a77c1e1119da199ff32622a1609431b9f1c47</infohash>
<guid>https://academictorrents.com/details/564a77c1e1119da199ff32622a1609431b9f1c47</guid>
<link>https://academictorrents.com/details/564a77c1e1119da199ff32622a1609431b9f1c47</link>
<description>ImageNet is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them nouns (80,000+). In ImageNet, we aim to provide on average 1000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. Upon its completion, we hope ImageNet will offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy. For more information, see http://www.image-net.org/about-overview</description>
<size>1309848811520</size>
</item><item>
<title>ImageNet LSVRC 2013 Validation Set (Object Detection)</title>
<category>Dataset</category>
<infohash>f47c081054f6301d908b5840bed507b3d981e669</infohash>
<guid>https://academictorrents.com/details/f47c081054f6301d908b5840bed507b3d981e669</guid>
<link>https://academictorrents.com/details/f47c081054f6301d908b5840bed507b3d981e669</link>
<description>See http://image-net.org/challenges/LSVRC/2015/#det</description>
<size>2710476800</size>
</item><item>
<title>ImageNet LSVRC 2014 Training Set (Object Detection)</title>
<category>Dataset</category>
<infohash>fbc7a9f9a10be134a1738ba947efa1814ed3ce9b</infohash>
<guid>https://academictorrents.com/details/fbc7a9f9a10be134a1738ba947efa1814ed3ce9b</guid>
<link>https://academictorrents.com/details/fbc7a9f9a10be134a1738ba947efa1814ed3ce9b</link>
<description>See http://image-net.org/challenges/LSVRC/2015/#det</description>
<size>50121779200</size>
</item><item>
<title>DNS Census 2013 - dataset of registered domains and DNS records</title>
<category>Dataset</category>
<infohash>c89c9c891f7008e124e7382e605d04e3872e5541</infohash>
<guid>https://academictorrents.com/details/c89c9c891f7008e124e7382e605d04e3872e5541</guid>
<link>https://academictorrents.com/details/c89c9c891f7008e124e7382e605d04e3872e5541</link>
<description>==Introduction Probably the last time any person or entity had a complete list of all hostnames on the Internet was in the mid-1980s, when the Domain Name System (DNS) replaced the old, centralized DoD Internet Host Table. Some domain registries like Verisign have zone file access programs, which offer a way to download a complete zone file. But many country-code Top Level Domains (ccTLDs) do not offer such programs. Various companies have, usually through crawling large parts of the web, collected a huge number of DNS records, but none have a complete list of all domains. And even those incomplete lists are in the hands of a few companies like Google, Microsoft (Bing) and DomainTools LLC (whois.sc), presumably most major ISPs, and some research organizations like DNS-OARC, where they are treated as closely guarded company secrets. Other than the zone files provided by Verisign and some other registries, there are few datasets freely available for research or data mining. The DNS Census 2013 is an attempt to provide a public dataset of registered domains and DNS records. It was inspired by the Internet Census 2012, which showed that releasing data anonymously via BitTorrent is a good thing to do. The dataset contains about 2.5 billion DNS records gathered in the years 2012-2013.</description>
<size>15643996769</size>
</item><item>
<title>Outerra Earth Data</title>
<category>Dataset</category>
<infohash>f8daca10d620cb3a5d9554c79b8e640361e8696d</infohash>
<guid>https://academictorrents.com/details/f8daca10d620cb3a5d9554c79b8e640361e8696d</guid>
<link>https://academictorrents.com/details/f8daca10d620cb3a5d9554c79b8e640361e8696d</link>
<description>Note: to use the Outerra Tech Demo without an internet connection, run "outerra.exe -demo" from the command line. At the moment this works only in demo mode; full mode requires an internet connection.</description>
<size>15516290509</size>
</item><item>
<title>NYU Depth Dataset V2</title>
<category>Dataset</category>
<infohash>2b48551a335bc5fb6a06f6e7a1ab5d32ec263d27</infohash>
<guid>https://academictorrents.com/details/2b48551a335bc5fb6a06f6e7a1ab5d32ec263d27</guid>
<link>https://academictorrents.com/details/2b48551a335bc5fb6a06f6e7a1ab5d32ec263d27</link>
<description>The NYU-Depth V2 data set is comprised of video sequences from a variety of indoor scenes, recorded by both the RGB and depth cameras of the Microsoft Kinect. It contains: 1449 densely labeled pairs of aligned RGB and depth images; 464 new scenes taken from 3 cities; 407,024 new unlabeled frames. Each object is labeled with a class and an instance number (cup1, cup2, cup3, etc.). The dataset has several components: Labeled: a subset of the video data accompanied by dense multi-class labels, preprocessed to fill in missing depth labels. Raw: the raw RGB, depth and accelerometer data as provided by the Kinect. Toolbox: useful functions for manipulating the data and labels. The raw dataset contains the raw image and accelerometer dumps from the Kinect. The RGB and depth camera sampling rate lies between 20 and 30 FPS (variable over time). While the frames are not synchronized, the timestamps for each of the RGB, depth and accelerometer files are included as part of each filename and can be synchronized to produce a continuous video using the get_synched_frames.m function in the Toolbox. The dataset is divided into folders corresponding to each 'scene' being filmed, such as 'living_room_0012' or 'office_0014'. The file hierarchy is structured as follows:
/
../bedroom_0001/
../bedroom_0001/a-1294886363.011060-3164794231.dump
../bedroom_0001/a-1294886363.016801-3164794231.dump
...
../bedroom_0001/d-1294886362.665769-3143255701.pgm
../bedroom_0001/d-1294886362.793814-3151264321.pgm
...
../bedroom_0001/r-1294886362.238178-3118787619.ppm
../bedroom_0001/r-1294886362.814111-3152792506.ppm
Files that begin with the prefix a- are the accelerometer dumps. These dumps are written to disk in binary and can be read with the file get_accel_data.mex. Files that begin with the prefixes r- and d- are the frames from the RGB and depth cameras, respectively.
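The naming convention above (prefix, timestamp in seconds, frame counter) can be pulled apart with a short script. This is an illustrative Python sketch under the stated filename format, not part of the dataset's MATLAB toolbox (which provides get_timestamp_from_filename.m and get_synched_frames.m for this purpose); the helper names are our own:

```python
import re

# Raw NYU Depth V2 filenames look like:
#   a-1294886363.011060-3164794231.dump   (accelerometer)
#   d-1294886362.665769-3143255701.pgm    (depth)
#   r-1294886362.238178-3118787619.ppm    (RGB)
# i.e. prefix - timestamp (seconds) - frame counter . extension
FILENAME_RE = re.compile(r"^([adr])-(\d+\.\d+)-(\d+)\.(dump|pgm|ppm)$")

def parse_frame_filename(name):
    """Return (sensor, timestamp, counter) parsed from a raw frame filename."""
    m = FILENAME_RE.match(name)
    if m is None:
        raise ValueError("unrecognized filename: " + name)
    sensor = {"a": "accel", "d": "depth", "r": "rgb"}[m.group(1)]
    return sensor, float(m.group(2)), int(m.group(3))

def nearest_rgb(depth_ts, rgb_timestamps):
    """Pair a depth frame with the RGB timestamp nearest in time."""
    return min(rgb_timestamps, key=lambda t: abs(t - depth_ts))
```

Matching on the nearest timestamp is the same idea the toolbox's synchronization routine relies on, since the two cameras are not triggered in lockstep.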
Since no preprocessing has been performed, the raw depth images must be projected onto the RGB coordinate space in order to align the images.
Toolbox: the MATLAB toolbox has several useful functions for handling the data.
camera_params.m - contains the camera parameters for the Kinect used to capture the data.
crop_image.m - crops an image to use only the area where the depth signal is projected.
fill_depth_colorization.m - fills in the depth using Levin et al.'s colorization method.
fill_depth_cross_bf.m - fills in the depth using a cross-bilateral filter at multiple scales.
get_accel_data.m - returns the accelerometer parameters at a specific moment in time.
get_instance_masks.m - returns a set of binary masks, one for each object instance in an image.
get_rgb_depth_overlay.m - returns a visualization of the RGB and depth alignment.
get_synched_frames.m - returns a set of synchronized RGB and depth frames that can be used to produce RGBD videos of each scene.
get_timestamp_from_filename.m - returns the timestamp from the raw dataset filenames; useful for sampling the raw video dumps at even intervals in time.
project_depth_map.m - projects the depth map from the Kinect onto the RGB image plane.</description>
<size>428798889146</size>
</item><item>
<title>The MacTeX 2015 Distribution</title>
<category>Dataset</category>
<infohash>fe3fda68caa7c6a3821121c9f9b35c581b7af913</infohash>
<guid>https://academictorrents.com/details/fe3fda68caa7c6a3821121c9f9b35c581b7af913</guid>
<link>https://academictorrents.com/details/fe3fda68caa7c6a3821121c9f9b35c581b7af913</link>
<description>TeX (= tau epsilon chi, pronounced similar to "blecch", not like the state known for Tex-Mex chili) is a computer language designed for use in typesetting; in particular, for typesetting math and other technical (from Greek "techne" = art/craft, the stem of "technology") material. After downloading, move the file MacTeX.pkg to the desktop or another convenient spot, and double-click it to install. MacTeX completely configures TeX, so after installation it is ready to use. Go to /Applications/TeX and read the short document "READ ME FIRST" to get started. This document explains how to use LaTeX to write and typeset a short document. The location /Applications/TeX also contains "What is installed", which lists all the components of MacTeX and their installation locations. MacTeX installs TeX Live, which contains TeX, LaTeX, AMS-TeX, and virtually every TeX-related style file and font. TeX Live is maintained by TeX user groups across the world and is compiled from the same sources for all platforms: Macintosh, Windows, Linux, Unix. MacTeX also installs Ghostscript, an open source interpreter for PostScript, along with the GUI programs TeXShop, LaTeXiT, TeX Live Utility, and BibDesk.</description>
<size>2678205607</size>
</item><item>
<title>Million Song Dataset Subset</title>
<category>Dataset</category>
<infohash>e0b6b5ff012fcda7c4a14e4991d8848a6a2bf52b</infohash>
<guid>https://academictorrents.com/details/e0b6b5ff012fcda7c4a14e4991d8848a6a2bf52b</guid>
<link>https://academictorrents.com/details/e0b6b5ff012fcda7c4a14e4991d8848a6a2bf52b</link>
<description>To let you get a feel for the dataset without committing to a full download, we also provide a subset consisting of 10,000 songs (1%, 1.8 GB) selected at random. It contains "additional files" (SQLite databases) in the same format as those for the full set, but referring only to the 10K-song subset. Therefore, you can develop code on the subset, then port it to the full dataset. The Million Song Dataset is a freely available collection of audio features and metadata for a million contemporary popular music tracks. Its purposes are: to encourage research on algorithms that scale to commercial sizes; to provide a reference dataset for evaluating research; to serve as a shortcut alternative to creating a large dataset with APIs (e.g. The Echo Nest's); and to help new researchers get started in the MIR field. The core of the dataset is the feature analysis and metadata for one million songs, provided by The Echo Nest. The dataset does not include any audio, only the derived features. Note, however, that sample audio can be fetched from services like 7digital, using code we provide. Please cite the following paper: Thierry Bertin-Mahieux, Daniel P.W. Ellis, Brian Whitman, and Paul Lamere. The Million Song Dataset. In Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011), 2011.</description>
<size>1994614463</size>
</item><item>
<title>UCI Folio Leaf Dataset</title>
<category>Dataset</category>
<infohash>a6c64db1e42721f5d7e7aa2b118e293a0d0d335b</infohash>
<guid>https://academictorrents.com/details/a6c64db1e42721f5d7e7aa2b118e293a0d0d335b</guid>
<link>https://academictorrents.com/details/a6c64db1e42721f5d7e7aa2b118e293a0d0d335b</link>
<description>Source: The leaves were taken from plants on the farm of the University of Mauritius and nearby locations.
Donors: Trishen Munisami (trishen.munisami@gmail.com), Mahess Ramsurn (ramsurn.mahess@umail.uom.ac.mu), Somveer Kishnah (s.kishnah@uom.ac.mu), Sameerchand Pudaruth (sameerchand.pudaruth@gmail.com)
Data Set Information:
- The leaves were placed on a white background and then photographed.
- The pictures were taken in broad daylight to ensure optimum light intensity.
Attribute Information: List of plant species: 1. Beaumier du perou 2. Eggplant 3. Fruitcitere 4. Guava 5. Hibiscus 6. Betel 7. Rose 8. Chrysanthemum 9. Ficus 10. Duranta gold 11. Ashanti blood 12. Bitter Orange 13. Coeur Demoiselle 14. Jackfruit 15. Mulberry Leaf 16. Pimento 17. Pomme Jacquot 18. Star Apple 19. Barbados Cherry 20. Sweet Olive 21. Croton 22. Thevetia 23. Vieux Garcon 24. Chocolate tree 25. Carricature plant 26. Coffee 27. Ketembilla 28. Chinese guava 29. Lychee 30. Geranium 31. Sweet potato 32. Papaya
Relevant Paper / Citation Request: Munisami, T., Ramsurn, M., Kishnah, S. and Pudaruth, S., 2015. Plant leaf recognition using shape features and colour histogram with k-nearest neighbour classifiers. Procedia Computer Science (Elsevier) Journal, 58, pp. 740-747.</description>
<size>972471245</size>
</item><item>
<title>Columbia Object Image Library (COIL-100)</title>
<category>Dataset</category>
<infohash>ce39e4554b2207c7764a58acf190dd3ccfa227e2</infohash>
<guid>https://academictorrents.com/details/ce39e4554b2207c7764a58acf190dd3ccfa227e2</guid>
<link>https://academictorrents.com/details/ce39e4554b2207c7764a58acf190dd3ccfa227e2</link>
<description>The Columbia Object Image Library (COIL-100) is a database of color images of 100 objects. The objects were placed on a motorized turntable against a black background. The turntable was rotated through 360 degrees to vary object pose with respect to a fixed color camera. Images of the objects were taken at pose intervals of 5 degrees, corresponding to 72 poses per object. The images were size normalized. COIL-100 is available online via ftp. "Columbia Object Image Library (COIL-100)," S. A. Nene, S. K. Nayar and H. Murase, Technical Report CUCS-006-96, February 1996.</description>
<size>130688843</size>
</item><item>
<title>CIFAR-100 (Canadian Institute for Advanced Research)</title>
<category>Dataset</category>
<infohash>9adb30144cf53809ec0613fa869b0a65b4e81ff5</infohash>
<guid>https://academictorrents.com/details/9adb30144cf53809ec0613fa869b0a65b4e81ff5</guid>
<link>https://academictorrents.com/details/9adb30144cf53809ec0613fa869b0a65b4e81ff5</link>
<description>This dataset is just like the CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs). Here is the list of classes in the CIFAR-100, grouped by superclass:
aquatic mammals: beaver, dolphin, otter, seal, whale
fish: aquarium fish, flatfish, ray, shark, trout
flowers: orchids, poppies, roses, sunflowers, tulips
food containers: bottles, bowls, cans, cups, plates
fruit and vegetables: apples, mushrooms, oranges, pears, sweet peppers
household electrical devices: clock, computer keyboard, lamp, telephone, television
household furniture: bed, chair, couch, table, wardrobe
insects: bee, beetle, butterfly, caterpillar, cockroach
large carnivores: bear, leopard, lion, tiger, wolf
large man-made outdoor things: bridge, castle, house, road, skyscraper
large natural outdoor scenes: cloud, forest, mountain, plain, sea
large omnivores and herbivores: camel, cattle, chimpanzee, elephant, kangaroo
medium-sized mammals: fox, porcupine, possum, raccoon, skunk
non-insect invertebrates: crab, lobster, snail, spider, worm
people: baby, boy, girl, man, woman
reptiles: crocodile, dinosaur, lizard, snake, turtle
small mammals: hamster, mouse, rabbit, shrew, squirrel
trees: maple, oak, palm, pine, willow
vehicles 1: bicycle, bus, motorcycle, pickup truck, train
vehicles 2: lawn-mower, rocket, streetcar, tank, tractor
Yes, I know mushrooms aren't really fruit or vegetables and bears aren't really carnivores.</description>
<size>168513733</size>
</item><item>
<title>CIFAR-10 (Canadian Institute for Advanced Research)</title>
<category>Dataset</category>
<infohash>463ba7ec7f37ed414c12fbb71ebf6431eada2d7a</infohash>
<guid>https://academictorrents.com/details/463ba7ec7f37ed414c12fbb71ebf6431eada2d7a</guid>
<link>https://academictorrents.com/details/463ba7ec7f37ed414c12fbb71ebf6431eada2d7a</link>
<description>The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.</description>
<size>170052171</size>
</item><item>
<title>HIGGS Data Set </title>
<category>Dataset</category>
<infohash>bcad357813557c282527317a5a5cf593df8eb7a9</infohash>
<guid>https://academictorrents.com/details/bcad357813557c282527317a5a5cf593df8eb7a9</guid>
<link>https://academictorrents.com/details/bcad357813557c282527317a5a5cf593df8eb7a9</link>
<description>This is a classification problem to distinguish between a signal process which produces Higgs bosons and a background process which does not.</description>
<size>2816407858</size>
</item><item>
<title>Caltech256 Image Dataset</title>
<category>Dataset</category>
<infohash>7de9936b060525b6fa7f5d8aabd316637d677622</infohash>
<guid>https://academictorrents.com/details/7de9936b060525b6fa7f5d8aabd316637d677622</guid>
<link>https://academictorrents.com/details/7de9936b060525b6fa7f5d8aabd316637d677622</link>
<description>==Overview 256 object categories plus a clutter category. At least 80 images per category. 30,608 images in total, compared with 9,144 in Caltech-101.</description>
<size>1183006720</size>
</item><item>
<title>Caltech101 Image Dataset</title>
<category>Dataset</category>
<infohash>410206b2624ab243b0fa87058f73927fc44a5b7c</infohash>
<guid>https://academictorrents.com/details/410206b2624ab243b0fa87058f73927fc44a5b7c</guid>
<link>https://academictorrents.com/details/410206b2624ab243b0fa87058f73927fc44a5b7c</link>
<description>==Description Pictures of objects belonging to 101 categories, with about 40 to 800 images per category; most categories have about 50 images. Collected in September 2003 by Fei-Fei Li, Marco Andreetto, and Marc'Aurelio Ranzato. The size of each image is roughly 300 x 200 pixels. We have carefully clicked outlines of each object in these pictures; these are included in Annotations.tar. There is also a MATLAB script to view the annotations, show_annotations.m.
How to use the dataset: If you are using the Caltech 101 dataset for testing your recognition algorithm, you should try to make your results comparable to the results of others. We suggest training and testing on a fixed number of pictures and repeating the experiment with different random selections of pictures in order to obtain error bars. Popular numbers of training images: 1, 3, 5, 10, 15, 20, 30. Popular numbers of testing images: 20, 30. See also the discussion below. When you report your results, please keep track of which images you used and which were misclassified. We will soon publish a more detailed experimental protocol that allows you to report those details. See the Discussion section for more details.</description>
<size>145768831</size>
</item><item>
<title>MS Common Objects in Context (COCO2014)</title>
<category>Dataset</category>
<infohash>f993c01f3c268b5d57219a38f8ec73ee7524421a</infohash>
<guid>https://academictorrents.com/details/f993c01f3c268b5d57219a38f8ec73ee7524421a</guid>
<link>https://academictorrents.com/details/f993c01f3c268b5d57219a38f8ec73ee7524421a</link>
<description>Microsoft COCO is a new image recognition, segmentation, and captioning dataset. Microsoft COCO has several features: object segmentation; recognition in context; multiple objects per image; more than 300,000 images; more than 2 million instances; 80 object categories; 5 captions per image. The 2014 Testing Images are for the MS COCO Captioning Challenge, while the 2015 Testing Images are for the MS COCO Detection Challenge. The train and val data are common to both challenges. Note also that as an alternative to downloading the large image zip files, individual images may be downloaded from the COCO website using the "coco_url" field specified in the image info struct.</description>
<size>26815885986</size>
</item><item>
<title>VQA: Visual Question Answering Dataset</title>
<category>Dataset</category>
<infohash>f075ad12eccbbd665aec68db5d208dc68e7a384f</infohash>
<guid>https://academictorrents.com/details/f075ad12eccbbd665aec68db5d208dc68e7a384f</guid>
<link>https://academictorrents.com/details/f075ad12eccbbd665aec68db5d208dc68e7a384f</link>
<description>254,721 images, 764,163 questions, 9,934,119 answers!</description>
<size>7984418554</size>
</item><item>
<title>MIT OCW 8.02 - Physics II -  Electricity and Magnetism</title>
<category>Course</category>
<infohash>7010163eae33fbac6c065095e908c2c49550b931</infohash>
<guid>https://academictorrents.com/details/7010163eae33fbac6c065095e908c2c49550b931</guid>
<link>https://academictorrents.com/details/7010163eae33fbac6c065095e908c2c49550b931</link>
<description>8.02 Classical Theory of Electromagnetism. In addition to the basic concepts of Electromagnetism, a vast variety of interesting topics are covered in this course: Lightning, Pacemakers, Electric Shock Treatment, Electrocardiograms, Metal Detectors, Musical Instruments, Magnetic Levitation, Bullet Trains, Electric Motors, Radios, TV, Car Coils, Superconductivity, Aurora Borealis, Rainbows, Radio Telescopes, Interferometers, Particle Accelerators (a.k.a. Atom Smashers or Colliders), Mass Spectrometers, Red Sunsets, Blue Skies, Haloes around Sun and Moon, Color Perception, Doppler Effect, Big-Bang Cosmology.</description>
<size>4085011949</size>
</item><item>
<title>(Partial) User Preference Similarity as Classification-Based Model Similarity</title>
<category>Paper</category>
<infohash>b54973c5ec9ccb8eba4454f2222d26da47c6826e</infohash>
<guid>https://academictorrents.com/details/b54973c5ec9ccb8eba4454f2222d26da47c6826e</guid>
<link>https://academictorrents.com/details/b54973c5ec9ccb8eba4454f2222d26da47c6826e</link>
<description>Recommender systems play an important role in helping people find items they like. One type of recommender system is collaborative filtering, which considers the feedback of like-minded people. The fundamental assumption of collaborative filtering is that people who previously shared similar preferences behave similarly later on. This paper introduces several novel, classification-based similarity metrics that are used to compare user preferences. Furthermore, the concept of partial preference similarity based on a machine learning model is presented. For evaluation, the cold-start behavior of the presented classification-based similarity metrics is examined in a large-scale experiment. It is shown that classification-based similarity metrics with machine learning significantly outperform other similarity approaches in different cold-start situations under different degrees of data sparseness.</description>
<size>1693102</size>
</item><item>
<title>Social Influence Analysis in Microblogging Platforms - A Topic-Sensitive based Approach</title>
<category>Paper</category>
<infohash>a0e52d081a79bb468ffef603974c313531141b9c</infohash>
<guid>https://academictorrents.com/details/a0e52d081a79bb468ffef603974c313531141b9c</guid>
<link>https://academictorrents.com/details/a0e52d081a79bb468ffef603974c313531141b9c</link>
<description>The use of social media, particularly microblogging platforms such as Twitter, has proven to be an effective channel for promoting ideas to online audiences. In a world where information can bias public opinion, it is essential to analyse the propagation and influence of information in large-scale networks. Recent research studying social media data to rank users by topical relevance has largely focused on the "retweet", "following" and "mention" relations. In this paper we propose the use of semantic profiles for deriving influential users based on the retweet subgraph of the Twitter graph. We introduce a variation of the PageRank algorithm for analysing users' topical and entity influence based on the topical/entity relevance of a retweet relation. Experimental results show that our approach outperforms related algorithms including HITS, InDegree and Topic-Sensitive PageRank. We also introduce VisInfluence, a visualisation platform for presenting top influential users based on a topical query need.</description>
<size>3066360</size>
</item><item>
<title>DBpedia - A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia</title>
<category>Paper</category>
<infohash>3f8565983702407a4c583ed7aba2fe959479bdbc</infohash>
<guid>https://academictorrents.com/details/3f8565983702407a4c583ed7aba2fe959479bdbc</guid>
<link>https://academictorrents.com/details/3f8565983702407a4c583ed7aba2fe959479bdbc</link>
<description>The DBpedia community project extracts structured, multilingual knowledge from Wikipedia and makes it freely available on the Web using Semantic Web and Linked Data technologies. The project extracts knowledge from 111 different language editions of Wikipedia. The largest DBpedia knowledge base which is extracted from the English edition of Wikipedia consists of over 400 million facts that describe 3.7 million things. The DBpedia knowledge bases that are extracted from the other 110 Wikipedia editions together consist of 1.46 billion facts and describe 10 million additional things. The DBpedia project maps Wikipedia infoboxes from 27 different language editions to a single shared ontology consisting of 320 classes and 1,650 properties. The mappings are created via a world-wide crowd-sourcing effort and enable knowledge from the different Wikipedia editions to be combined. The project publishes regular releases of all DBpedia knowledge bases for download and provides SPARQL query access to 14 out of the 111 language editions via a global network of local DBpedia chapters. In addition to the regular releases, the project maintains a live knowledge base which is updated whenever a page in Wikipedia changes. DBpedia sets 27 million RDF links pointing into over 30 external data sources and thus enables data from these sources to be used together with DBpedia data. Several hundred data sets on the Web publish RDF links pointing to DBpedia themselves and thus make DBpedia one of the central interlinking hubs in the Linked Open Data (LOD) cloud. In this system report, we give an overview of the DBpedia community project, including its architecture, technical implementation, maintenance, internationalisation, usage statistics and applications.</description>
<size>2152010</size>
</item><item>
<title>How Ontologies Benefit Enterprise Applications</title>
<category>Paper</category>
<infohash>6d266b7a39716a1bda3ba35241334abbf554e3ea</infohash>
<guid>https://academictorrents.com/details/6d266b7a39716a1bda3ba35241334abbf554e3ea</guid>
<link>https://academictorrents.com/details/6d266b7a39716a1bda3ba35241334abbf554e3ea</link>
<description>This paper contributes an argumentation line for how technological features of ontologies lead to benefits for enterprise applications. Although many features are also available in precursory or alternative technologies, we claim that combinations of specific features are uniquely provided by ontologies. A careful elicitation of the available features therefore is a prerequisite for the argumentation line. As a second contribution, this paper reports on several challenges that frequently occur when trying to adopt ontologies in existing enterprise settings. These challenges have to be contrasted with the often overstressed benefits in Semantic Web literature. Together with reports for several SAP Research case studies, this paper channels back experiences in applying ontologies to the Semantic Web community. As a third contribution, we give several recommendations for future research directions based on the gathered experiences.</description>
<size>1018491</size>
</item><item>
<title>Distantly Supervised Web Relation Extraction for Knowledge Base Population</title>
<category>Paper</category>
<infohash>af875ebea3596abe77096d46b96be86960596198</infohash>
<guid>https://academictorrents.com/details/af875ebea3596abe77096d46b96be86960596198</guid>
<link>https://academictorrents.com/details/af875ebea3596abe77096d46b96be86960596198</link>
<description>Extracting information from Web pages for populating large, cross-domain knowledge bases requires methods which are suitable across domains, do not require manual effort to adapt to new domains, are able to deal with noise, and integrate information extracted from different Web pages. Recent approaches have used existing knowledge bases to learn to extract information with promising results, one of those approaches being distant supervision. Distant supervision is an unsupervised method which uses background information from the Linking Open Data cloud to automatically label sentences with relations to create training data for relation classifiers. In this paper we propose the use of distant supervision for relation extraction from the Web. Although the method is promising, existing approaches are still not suitable for Web extraction as they suffer from three main issues: data sparsity, noise and lexical ambiguity. Our approach reduces the impact of data sparsity by making entity recognition tools more robust across domains and extracting relations across sentence boundaries using unsupervised co-reference resolution methods. We reduce the noise caused by lexical ambiguity by employing statistical methods to strategically select training data. To combine information extracted from multiple sources for populating knowledge bases we present and evaluate several information integration strategies and show that those benefit immensely from additional relation mentions extracted using co-reference resolution, increasing precision by 8%. We further show that strategically selecting training data can increase precision by a further 3%.</description>
<size>229538</size>
</item><item>
<title>MIT OCW 6.451 Principles of Digital Communication II Spring 05</title>
<category>Course</category>
<infohash>83259dc0c7dbd50073113cb1aa7a3e574faca70c</infohash>
<guid>https://academictorrents.com/details/83259dc0c7dbd50073113cb1aa7a3e574faca70c</guid>
<link>https://academictorrents.com/details/83259dc0c7dbd50073113cb1aa7a3e574faca70c</link>
<description>##Course Description This course is the second of a two-term sequence with 6.450. The focus is on coding techniques for approaching the Shannon limit of additive white Gaussian noise (AWGN) channels, their performance analysis, and design principles. After a review of 6.450 and the Shannon limit for AWGN channels, the course begins by discussing small signal constellations, performance analysis and coding gain, and hard-decision and soft-decision decoding. It continues with binary linear block codes, Reed-Muller codes, finite fields, Reed-Solomon and BCH codes, binary linear convolutional codes, and the Viterbi algorithm. More advanced topics include trellis representations of binary linear block codes and trellis-based decoding; codes on graphs; the sum-product and min-sum algorithms; the BCJR algorithm; turbo codes, LDPC codes and RA codes; and performance of LDPC codes with iterative decoding. Finally, the course addresses coding for the bandwidth-limited regime, including lattice codes, trellis-coded modulation, multilevel coding and shaping. If time permits, it covers equalization of linear Gaussian channels.
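The hard- vs soft-decision distinction mentioned above can be sketched with a toy simulation (an illustrative example, not part of the course materials; the repetition factor, noise level, and trial count are all assumed parameters): a length-5 repetition code over a simulated AWGN channel, decoded once by slicing each sample to a bit and majority-voting, and once by summing the raw channel outputs before deciding.

```python
import random

# Illustrative sketch only (not from the course materials): compare
# hard-decision and soft-decision decoding of a length-5 repetition
# code over a simulated AWGN channel. All parameter choices here
# (repetition factor, noise level, trial count) are assumptions.

N = 5          # repetition factor: each bit is sent 5 times
SIGMA = 1.0    # standard deviation of the additive Gaussian noise
TRIALS = 20000

random.seed(0)
hard_errors = soft_errors = 0
for _ in range(TRIALS):
    bit = random.randint(0, 1)
    tx = 1.0 if bit else -1.0                    # BPSK mapping: 0 -> -1, 1 -> +1
    rx = [tx + random.gauss(0.0, SIGMA) for _ in range(N)]

    # Hard decision: slice each noisy sample to a bit first, then majority-vote.
    ones = sum(1 for r in rx if r > 0)
    hard_bit = 1 if ones > N // 2 else 0

    # Soft decision: keep the raw channel outputs and sum them before deciding.
    soft_bit = 1 if sum(rx) > 0 else 0

    hard_errors += int(hard_bit != bit)
    soft_errors += int(soft_bit != bit)

print("hard-decision errors:", hard_errors)
print("soft-decision errors:", soft_errors)
```

The soft-decision decoder should make noticeably fewer errors, which is the coding-gain intuition the lectures develop rigorously.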
Lecture 1: Introduction; Sampling Theorem
Lecture 2: Performance of Small Signal Constellations
Lecture 3: Hard-decision and Soft-decision Decoding
Lecture 4: Hard-decision and Soft-decision Decoding
Lecture 5: Introduction to Binary Block Codes
Lecture 6: Introduction to Binary Block Codes
Lecture 7: Introduction to Finite Fields
Lecture 8: Introduction to Finite Fields
Lecture 9: Introduction to Finite Fields
Lecture 10: Reed-Solomon Codes
Lecture 11: Reed-Solomon Codes
Lecture 12: Reed-Solomon Codes
Lecture 13: Introduction to Convolutional Codes
Lecture 14: Introduction to Convolutional Codes
Lecture 15: Trellis Representations of Binary Linear Block Codes
Lecture 16: Trellis Representations of Binary Linear Block Codes
Lecture 17: Codes on Graphs
Lecture 18: Codes on Graphs
Lecture 19: The Sum-Product Algorithm
Lecture 20: Turbo, LDPC, and RA Codes
Lecture 21: Turbo, LDPC, and RA Codes
Lecture 22: Lattice and Trellis Codes
Lecture 23: Lattice and Trellis Codes
Lecture 24: Linear Gaussian Channels
Lecture 25: Linear Gaussian Channels</description>
<size>12641790915</size>
</item><item>
<title>Antiutopias</title>
<category>Paper</category>
<infohash>eaf07274a16fa05bf47419201664f387573be54e</infohash>
<guid>https://academictorrents.com/details/eaf07274a16fa05bf47419201664f387573be54e</guid>
<link>https://academictorrents.com/details/eaf07274a16fa05bf47419201664f387573be54e</link>
<description>Anti-utopias in 20th-century literature and cinema.</description>
<size>823582</size>
</item><item>
<title>Mars Image - sol710 MastCamRight Composite (Terms of use example)</title>
<category>Dataset</category>
<infohash>83a4dd7e764dfd34bca103be8b6094a9bd346d30</infohash>
<guid>https://academictorrents.com/details/83a4dd7e764dfd34bca103be8b6094a9bd346d30</guid>
<link>https://academictorrents.com/details/83a4dd7e764dfd34bca103be8b6094a9bd346d30</link>
<description>This is a sample upload that demonstrates how to require terms of use to be accepted by setting the terms attribute in the bibtex. By simply adding a terms attribute, the terms are displayed and downloading a .torrent file is restricted until the user has checked the box indicating they accept the terms.</description>
<size>7875599</size>
</item><item>
<title>BRUNO LATOUR - Ciencia em Acao</title>
<category>Paper</category>
<infohash>2b1efe2a4be5514412846b3f7266be2237a276d3</infohash>
<guid>https://academictorrents.com/details/2b1efe2a4be5514412846b3f7266be2237a276d3</guid>
<link>https://academictorrents.com/details/2b1efe2a4be5514412846b3f7266be2237a276d3</link>
<description>BRUNO LATOUR - Ciência em Ação (Science in Action) - text to accompany the reading.</description>
<size>216554</size>
</item><item>
<title>RITCHIE - PERSEUS</title>
<category>Paper</category>
<infohash>ae29b232f77c11ebbb34668e324904d781f27f91</infohash>
<guid>https://academictorrents.com/details/ae29b232f77c11ebbb34668e324904d781f27f91</guid>
<link>https://academictorrents.com/details/ae29b232f77c11ebbb34668e324904d781f27f91</link>
<description>PERSEUS, Frank Ritchie - Bilingual Latin-Portuguese edition.</description>
<size>32854</size>
</item><item>
<title>Mahmoud Darwish.pdf</title>
<category>Paper</category>
<infohash>fa4823498b3a3dcd64efda85fd532a65307b16e8</infohash>
<guid>https://academictorrents.com/details/fa4823498b3a3dcd64efda85fd532a65307b16e8</guid>
<link>https://academictorrents.com/details/fa4823498b3a3dcd64efda85fd532a65307b16e8</link>
<description>A graduation project about Mahmoud Darwish (one of the most prominent poets in Palestine and the Arab world).</description>
<size>718603</size>
</item><item>
<title>Reddit Public Comments (2007-10 through 2015-05)</title>
<category>Dataset</category>
<infohash>7690f71ea949b868080401c749e878f98de34d3d</infohash>
<guid>https://academictorrents.com/details/7690f71ea949b868080401c749e878f98de34d3d</guid>
<link>https://academictorrents.com/details/7690f71ea949b868080401c749e878f98de34d3d</link>
<description>~1.7 billion JSON comment objects from reddit.com, complete with the comment body, score, author, subreddit, position in the comment tree, and other fields that are available through Reddit's API.</description>
<size>160678037702</size>
</item><item>
<title>OpenMIIR RawEEG v1.0</title>
<category>Dataset</category>
<infohash>c18c04a9f18ff7d133421012978c4a92f57f6b9c</infohash>
<guid>https://academictorrents.com/details/c18c04a9f18ff7d133421012978c4a92f57f6b9c</guid>
<link>https://academictorrents.com/details/c18c04a9f18ff7d133421012978c4a92f57f6b9c</link>
<description>Music imagery information retrieval (MIIR) systems may one day be able to recognize a song just as we think of it. As a step towards such technology, we are presenting a public domain dataset of electroencephalography (EEG) recordings taken during music perception and imagination. We acquired this data during an ongoing study that so far comprised 10 subjects listening to and imagining 12 short music fragments - each 7s-16s long - taken from well-known pieces. These stimuli were selected from different genres and systematically span several musical dimensions such as meter, tempo and the presence of lyrics. This way, various retrieval and classification scenarios can be addressed. The dataset is primarily aimed to enable music information retrieval researchers interested in these new MIIR challenges to easily test and adapt their existing approaches for music analysis like fingerprinting, beat tracking or tempo estimation on this new kind of data. We also hope that the OpenMIIR dataset will facilitate a stronger interdisciplinary collaboration between music information retrieval researchers and neuroscientists.</description>
<size>6995284395</size>
</item><item>
<title>The SFU Mountain Dataset: Semi-Structured Woodland Trails Under Changing Environmental Conditions</title>
<category>Dataset</category>
<infohash>e3d6b8d9e87cab68c7947e800e337e58fc8d8e59</infohash>
<guid>https://academictorrents.com/details/e3d6b8d9e87cab68c7947e800e337e58fc8d8e59</guid>
<link>https://academictorrents.com/details/e3d6b8d9e87cab68c7947e800e337e58fc8d8e59</link>
<description>Note: If you want only a subset of the data presented here (just the ROS bags, but not the extracted JPEGs and CSVs for example, since there is some duplication), you can set your torrent client to download only the files you are interested in. License: This data is licensed under the Creative Commons - Attribution 4.0 International License: https://creativecommons.org/licenses/by/4.0. We present a novel long-term dataset of semistructured woodland terrain under varying lighting and weather conditions and with changing vegetation, infrastructure, and pedestrian traffic. This dataset is intended to aid the development of field robotics algorithms for long-term deployment in challenging outdoor environments. It includes more than 8 hours of trail navigation, with more available in the future as the environment changes. The data consist of readings from calibrated and synchronized sensors operating at 5 Hz to 50 Hz in the form of color stereo and grayscale monocular camera images, vertical and push-broom laser scans, GPS locations, wheel odometry, inertial measurements, and barometric pressure values. Each traversal covers approximately 4 km across three diverse woodland trail environments, and we have recorded under four different lighting and weather conditions to date: dry; wet; dusk; night. We also provide 383 hand-matched location correspondences between traversals as ground-truth for benchmarking place recognition and mapping algorithms. This paper describes the configuration of the vehicle, the trail environments covered, and the format of the data we provide.</description>
<size>521694907462</size>
</item><item>
<title>CT data of simple and thick gooseneck joint before and after testing</title>
<category>Dataset</category>
<infohash>643b613662df5a8a6f7e9e4cf330eedcbce37403</infohash>
<guid>https://academictorrents.com/details/643b613662df5a8a6f7e9e4cf330eedcbce37403</guid>
<link>https://academictorrents.com/details/643b613662df5a8a6f7e9e4cf330eedcbce37403</link>
<description>CT data of simple and thick gooseneck joint specimens before and after testing. Part of the MSc thesis work: A new concept to join members in frame construction with CNC-fabricated beams and nodes: strength test and failure analysis.</description>
<size>1937220791</size>
</item><item>
<title>Displacements of simple and thick gooseneck joint compressive and tensile testing</title>
<category>Dataset</category>
<infohash>93be7f385c59078a6524fc7a06dae033508f2f6f</infohash>
<guid>https://academictorrents.com/details/93be7f385c59078a6524fc7a06dae033508f2f6f</guid>
<link>https://academictorrents.com/details/93be7f385c59078a6524fc7a06dae033508f2f6f</link>
<description>Displacements measured by a linear displacement sensor during testing of the simple and thick gooseneck joints. Part of the MSc thesis: A new concept to join members in frame construction with CNC-fabricated beams and nodes - strength test and failure analysis.</description>
<size>572448079</size>
</item><item>
<title>Digital image correlation (DIC): Strains and displacements of tested simple and thick gooseneck joint</title>
<category>Dataset</category>
<infohash>f4f6869f9317055bf83bfc188962bafac221ba45</infohash>
<guid>https://academictorrents.com/details/f4f6869f9317055bf83bfc188962bafac221ba45</guid>
<link>https://academictorrents.com/details/f4f6869f9317055bf83bfc188962bafac221ba45</link>
<description>Formatted and raw data for displacements and strains of simple and thick gooseneck joint testing, calculated via digital image correlation (DIC) with Ncorr. Part of the MSc thesis: A new concept to join members in frame construction with CNC-fabricated beams and nodes - strength test and failure analysis.</description>
<size>15743975254</size>
</item><item>
<title>Stanford EE364A - Convex Optimization I - Boyd</title>
<category>Course</category>
<infohash>393dc896234b96a1cd251c14cfc65d2ff594d6e9</infohash>
<guid>https://academictorrents.com/details/393dc896234b96a1cd251c14cfc65d2ff594d6e9</guid>
<link>https://academictorrents.com/details/393dc896234b96a1cd251c14cfc65d2ff594d6e9</link>
<description>Catalog description
Concentrates on recognizing and solving convex optimization problems that arise in applications. Convex sets, functions, and optimization problems. Basics of convex analysis. Least-squares, linear and quadratic programs, semidefinite programming, minimax, extremal volume, and other problems. Optimality conditions, duality theory, theorems of alternative, and applications. Interior-point methods. Applications to signal processing, statistics and machine learning, control and mechanical engineering, digital and analog circuit design, and finance.

Course objectives
* to give students the tools and training to recognize convex optimization problems that arise in applications
* to present the basic theory of such problems, concentrating on results that are useful in computation
* to give students a thorough understanding of how such problems are solved, and some experience in solving them
* to give students the background required to use the methods in their own research work or applications

Videos
1. Introduction
2. Convex sets
3. Convex functions
4. Convex optimization problems
5. Duality
6. Approximation and fitting
7. Statistical estimation
8. Geometric problems
9. Numerical linear algebra background
10. Unconstrained minimization
11. Equality constrained minimization
12. Interior-point methods
13. Conclusions</description>
<size>4458038066</size>
</item><item>
<title>Caltech CS156 - Machine Learning - Yaser</title>
<category>Course</category>
<infohash>8190b5122515ab158cd29ccdb33ea946a3e529f4</infohash>
<guid>https://academictorrents.com/details/8190b5122515ab158cd29ccdb33ea946a3e529f4</guid>
<link>https://academictorrents.com/details/8190b5122515ab158cd29ccdb33ea946a3e529f4</link>
<description>##Outline This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data, and in many financial, medical, commercial, and scientific applications. It enables computational systems to adaptively improve their performance with experience accumulated from the observed data. ML has become one of the hottest fields of study today, taken up by undergraduate and graduate students from 15 different majors at Caltech. This course balances theory and practice, and covers the mathematical as well as the heuristic aspects. The lectures below follow each other in a story-like fashion:
* What is learning?
* Can a machine learn?
* How to do it?
* How to do it well?
* Take-home lessons.

Lecture 01 - The Learning Problem - Introduction; supervised, unsupervised, and reinforcement learning. Components of the learning problem.
Lecture 02 - Is Learning Feasible? - Can we generalize from a limited sample to the entire space? Relationship between in-sample and out-of-sample.
Lecture 03 - The Linear Model I - Linear classification and linear regression. Extending linear models through nonlinear transforms.
Lecture 04 - Error and Noise - The principled choice of error measures. What happens when the target we want to learn is noisy.
Lecture 05 - Training versus Testing - The difference between training and testing in mathematical terms. What makes a learning model able to generalize?
Lecture 06 - Theory of Generalization - How an infinite model can learn from a finite sample. The most important theoretical result in machine learning.
Lecture 07 - The VC Dimension - A measure of what it takes a model to learn. Relationship to the number of parameters and degrees of freedom.
Lecture 08 - Bias-Variance Tradeoff - Breaking down the learning performance into competing quantities. The learning curves.
Lecture 09 - The Linear Model II - More about linear models. Logistic regression, maximum likelihood, and gradient descent.
Lecture 10 - Neural Networks - A biologically inspired model. The efficient backpropagation learning algorithm. Hidden layers.
Lecture 11 - Overfitting - Fitting the data too well; fitting the noise. Deterministic noise versus stochastic noise.
Lecture 12 - Regularization - Putting the brakes on fitting the noise. Hard and soft constraints. Augmented error and weight decay.
Lecture 13 - Validation - Taking a peek out of sample. Model selection and data contamination. Cross validation.
Lecture 14 - Support Vector Machines - One of the most successful learning algorithms; getting a complex model at the price of a simple one.
Lecture 15 - Kernel Methods - Extending SVM to infinite-dimensional spaces using the kernel trick, and to non-separable data using soft margins.
Lecture 16 - Radial Basis Functions - An important learning model that connects several machine learning models and techniques.
Lecture 17 - Three Learning Principles - Major pitfalls for machine learning practitioners; Occam's razor, sampling bias, and data snooping.
Lecture 18 - Epilogue - The map of machine learning. Brief views of Bayesian learning and aggregation methods.</description>
<size>3358232906</size>
</item><item>
<title>Stanford CS229 - Machine Learning - Andrew Ng</title>
<category>Course</category>
<infohash>da90dedfb78190e5c62af1ad40a2413cb918457f</infohash>
<guid>https://academictorrents.com/details/da90dedfb78190e5c62af1ad40a2413cb918457f</guid>
<link>https://academictorrents.com/details/da90dedfb78190e5c62af1ad40a2413cb918457f</link>
<description># Course Description This course provides a broad introduction to machine learning and statistical pattern recognition. Topics include: supervised learning (generative/discriminative learning, parametric/non-parametric learning, neural networks, support vector machines); unsupervised learning (clustering, dimensionality reduction, kernel methods); learning theory (bias/variance tradeoffs; VC theory; large margins); reinforcement learning and adaptive control. The course will also discuss recent applications of machine learning, such as to robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing. # Prerequisites Students are expected to have the following background: Knowledge of basic computer science principles and skills, at a level sufficient to write a reasonably non-trivial computer program. Familiarity with basic probability theory. (CS109 or Stat116 is sufficient but not necessary.) Familiarity with basic linear algebra. (Any one of Math 51, Math 103, Math 113, or CS 205 would be much more than necessary.)

Introduction (1 class)
* Basic concepts.

Supervised learning. (7 classes)
* Supervised learning setup. LMS.
* Logistic regression. Perceptron. Exponential family.
* Generative learning algorithms. Gaussian discriminant analysis. Naive Bayes.
* Support vector machines.
* Model selection and feature selection.
* Ensemble methods: Bagging, boosting.
* Evaluating and debugging learning algorithms.

Learning theory. (3 classes)
* Bias/variance tradeoff. Union and Chernoff/Hoeffding bounds.
* VC dimension. Worst case (online) learning.
* Practical advice on how to use learning algorithms.

Unsupervised learning. (5 classes)
* Clustering. K-means.
* EM. Mixture of Gaussians.
* Factor analysis.
* PCA (Principal components analysis).
* ICA (Independent components analysis).

Reinforcement learning and control. (4 classes)
* MDPs. Bellman equations.
* Value iteration and policy iteration.
* Linear quadratic regulation (LQR). LQG.
* Q-learning. Value function approximation.
* Policy search. Reinforce. POMDPs.

https://i.imgur.com/c7Pjt1G.png</description>
<size>4211379788</size>
</item><item>
<title>Enwiki Word2vec model 1000 Dimensions</title>
<category>Dataset</category>
<infohash>5d18911e7036870197bf5e23cf1be96d3353518a</infohash>
<guid>https://academictorrents.com/details/5d18911e7036870197bf5e23cf1be96d3353518a</guid>
<link>https://academictorrents.com/details/5d18911e7036870197bf5e23cf1be96d3353518a</link>
<description>Gensim Word2vec model built on the English Wikipedia: 1000 dimensions, CBOW with window 10, no stemming.</description>
<size>8629955743</size>
</item><item>
<title>codeing</title>
<category>Dataset</category>
<infohash>ef6710cc7b71285563a4df9f1bbba910315b51d6</infohash>
<guid>https://academictorrents.com/details/ef6710cc7b71285563a4df9f1bbba910315b51d6</guid>
<link>https://academictorrents.com/details/ef6710cc7b71285563a4df9f1bbba910315b51d6</link>
<description/>
<size>3168758858</size>
</item><item>
<title>StuCosRec 2014</title>
<category>Paper</category>
<infohash>3e727529cafc8d6f867474a9a8b4cc0761b92ed7</infohash>
<guid>https://academictorrents.com/details/3e727529cafc8d6f867474a9a8b4cc0761b92ed7</guid>
<link>https://academictorrents.com/details/3e727529cafc8d6f867474a9a8b4cc0761b92ed7</link>
<description>Proceedings of the 2014 1st Student Computer Science Research Conference.</description>
<size>4391405</size>
</item><item>
<title>A collection of sport activity files for data analysis and data mining</title>
<category>Dataset</category>
<infohash>aac04fca4cd3b4dcd580e9018d68fa0647b7d908</infohash>
<guid>https://academictorrents.com/details/aac04fca4cd3b4dcd580e9018d68fa0647b7d908</guid>
<link>https://academictorrents.com/details/aac04fca4cd3b4dcd580e9018d68fa0647b7d908</link>
<description>The dataset consists of data produced by nine cyclists. Data were exported directly from their Strava or Garmin Connect accounts. Sport activities are written in GPX or TCX form, which are basically XML formats adapted to specific purposes. From each activity, the following information can be obtained: GPS location, elevation, duration, distance, and average and maximal heart rate, while some workouts also include data obtained from power meters.</description>
<size>316182217</size>
</item><item>
<title>An Introduction to Computer Networks</title>
<category>Paper</category>
<infohash>958e2487d2db5f41f9c056bb35cf547edf38528f</infohash>
<guid>https://academictorrents.com/details/958e2487d2db5f41f9c056bb35cf547edf38528f</guid>
<link>https://academictorrents.com/details/958e2487d2db5f41f9c056bb35cf547edf38528f</link>
<description>An Introduction to Computer Networks, a free and open general-purpose computer-networking textbook, complete with diagrams and exercises. It covers the LAN, internetworking and transport layers, focusing primarily on TCP/IP. Particular attention is paid to congestion; other special topics include queuing, real-time traffic, network management, security and the ns simulator. The book is suitable as the primary text for an undergraduate or introductory graduate course in computer networking, as a supplemental text for a wide variety of network-related courses, and as a reference work.</description>
<size>4870326</size>
</item><item>
<title>Language-independent classifier-based modelling of source-side context information in Statistical Machine Translation (Data set)</title>
<category>Dataset</category>
<infohash>3cc11d076c62bfa0480afc187a4bb6e4759e5c16</infohash>
<guid>https://academictorrents.com/details/3cc11d076c62bfa0480afc187a4bb6e4759e5c16</guid>
<link>https://academictorrents.com/details/3cc11d076c62bfa0480afc187a4bb6e4759e5c16</link>
<description>We present a series of experiments focusing on the modelling of source-side context to improve Phrase-based Statistical Machine Translation. Statistical Machine Translation systems typically consist of a translation model and a language model. The former maps phrases in the source language to the target language, without regard for the context in which the source phrases occur. The latter models just the target language, and acts as a target-side model of context information after translation. We attempt to independently reproduce a line of existing research and test whether considering context information directly in the translation model has a positive effect on translation quality. We furthermore investigate various ways discriminative classifier-based models can be integrated into Statistical Machine Translation. We use proven techniques from Word Sense Disambiguation, effectively integrating these techniques into Statistical Machine Translation. Our approach is language-independent and knowledge-poor: we do not employ any explicit linguistic features computed by part-of-speech taggers, word sense disambiguation systems, supertaggers, or parsers, as used by previous work. We find only limited improvement of translation quality for certain formulaic corpora and conclude that explicit modelling of source-side context information does not add much to the data already implicitly available in the decoding process.</description>
<size>182966549</size>
</item><item>
<title>Wrappers for Feature Subset Selection</title>
<category>Paper</category>
<infohash>121f0a89e8229d8d65749beaabbf4580009963d4</infohash>
<guid>https://academictorrents.com/details/121f0a89e8229d8d65749beaabbf4580009963d4</guid>
<link>https://academictorrents.com/details/121f0a89e8229d8d65749beaabbf4580009963d4</link>
<description>In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular domain, a feature subset selection method should consider how the algorithm and the training data interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show improvements over the original design. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter-based approach to feature subset selection. Significant improvement in accuracy on real problems is achieved for the two families of induction algorithms used: decision trees and Naive-Bayes.</description>
<size>4035806</size>
</item><item>
<title>Netflix Prize Data Set </title>
<category>Dataset</category>
<infohash>9b13183dc4d60676b773c9e2cd6de5e5542cee9a</infohash>
<guid>https://academictorrents.com/details/9b13183dc4d60676b773c9e2cd6de5e5542cee9a</guid>
<link>https://academictorrents.com/details/9b13183dc4d60676b773c9e2cd6de5e5542cee9a</link>
<description>This is the official data set used in the Netflix Prize competition. The data consists of about 100 million movie ratings, and the goal is to predict missing entries in the movie-user rating matrix.

|Attribute|Value|
|---|---|
|Data Set Characteristics:|Multivariate, Time-Series|
|Attribute Characteristics:|Integer|
|Associated Tasks:|Clustering, Recommender-Systems|
|Number of Instances:|100480507|
|Number of Attributes:|17770|
|Missing Values?|Yes|
|Area:|N/A|

#Data Set Information: This dataset was constructed to support participants in the Netflix Prize. There are over 480,000 customers in the dataset, each identified by a unique integer id. The title and release year for each movie are also provided. There are over 17,000 movies in the dataset, each identified by a unique integer id. The dataset contains over 100 million ratings. The ratings were collected between October 1998 and December 2005 and reflect the distribution of all ratings received during this period. Each rating has a customer id, a movie id, the date of the rating, and the value of the rating. As part of the original Netflix Prize, a set of ratings was identified whose rating values were not provided in the original dataset. The object of the Prize was to accurately predict the ratings from this "qualifying" set. These missing ratings are now available in the grand_prize.tar.gz dataset file. #Attribute Information: The format of the data is described fully in the README files contained in the dataset tar files.

|Attribute|Value|
|---|---|
|MovieID:|Arbitrarily assigned unique integer in the range [1..17770].|
|CustomerID:|Arbitrarily assigned unique integer in the range [1..2649429] (with gaps).|
|Rating:|Number of "stars" assigned to a movie by a customer; an integer from 1 to 5.|
|Title:|English-language title of the movie on the Netflix website.|
|YearOfRelease:|Year a movie was released, in the range [1890..2005]. May correspond to the release of the corresponding DVD, not necessarily its theatrical release.|
|Date:|Timestamp of a rating in the form YYYY-MM-DD, in the range 1998-11-01 to 2005-12-31.|
|NetflixID:|Integer ID of a movie as currently used in the Netflix developer API.|

#Relevant Papers: James Bennett and Stan Lanning. "The Netflix Prize", 2007. http://rexa.info/paper/4755326FDAE3929649348DC380A46D3882A98198</description>
<size>697552028</size>
</item><item>
<title>MIT OCW 7.014 - Introductory Biology</title>
<category>Course</category>
<infohash>a2b8a74967e5166ab55bf5f248af8d564c39f795</infohash>
<guid>https://academictorrents.com/details/a2b8a74967e5166ab55bf5f248af8d564c39f795</guid>
<link>https://academictorrents.com/details/a2b8a74967e5166ab55bf5f248af8d564c39f795</link>
<description>Course Highlights This course features a complete set of video lectures by Professor Graham Walker, a Howard Hughes Medical Institute (HHMI) professor and director of the HHMI Education Group at MIT, and Professor Sallie W. Chisholm, Lee and Geraldine Martin Professor of Environmental Studies and co-director of the MIT Earth Systems Initiative. Education development efforts for these introductory biology courses are one of many activities conducted by the HHMI Education Group at MIT. This group focuses on curriculum development work for creating teaching tools in undergraduate biology courses. Course Description The MIT Biology Department core courses, 7.012, 7.013, and 7.014, all cover the same core material, which includes the fundamental principles of biochemistry, genetics, molecular biology, and cell biology. Biological function at the molecular level is particularly emphasized and covers the structure and regulation of genes, as well as the structure and synthesis of proteins, how these molecules are integrated into cells, and how these cells are integrated into multicellular systems and organisms. In addition, each version of the subject has its own distinctive material. 7.014 focuses on the application of these fundamental principles toward an understanding of microorganisms as geochemical agents responsible for the evolution and renewal of the biosphere, and of their role in human health and disease. Acknowledgements The study materials, problem sets, and quiz materials used during Spring 2005 for 7.014 include contributions from past instructors, teaching assistants, and other members of the MIT Biology Department affiliated with course 7.014. Since the following works have evolved over a period of many years, no single source can be attributed.</description>
<size>6021455122</size>
</item><item>
<title>MIT OCW 18.03 - Mathematics - Differential Equations</title>
<category>Course</category>
<infohash>e7ef009c1929f074ea556bad005ddca0d99ac7aa</infohash>
<guid>https://academictorrents.com/details/e7ef009c1929f074ea556bad005ddca0d99ac7aa</guid>
<link>https://academictorrents.com/details/e7ef009c1929f074ea556bad005ddca0d99ac7aa</link>
<description>##Course Description Differential Equations are the language in which the laws of nature are expressed. Understanding properties of solutions of differential equations is fundamental to much of contemporary science and engineering. Ordinary differential equations (ODE s) deal with functions of one variable, which can often be thought of as time. ##Prerequisites/Corequisites 18.03 Differential Equations has 18.01 Single Variable Calculus as a prerequisite. 18.02 Multivariable Calculus is a corequisite, meaning students can take 18.02 and 18.03 simultaneously. ##Texts Buy at Amazon Edwards, C., and D. Penney. Elementary Differential Equations with Boundary Value Problems. 6th ed. Upper Saddle River, NJ: Prentice Hall, 2003. ISBN: 9780136006138. Note: The 5th Edition (Buy at Amazon ISBN: 9780131457744) will serve as well. Students also need two sets of notes "18.03: Notes and Exercises" by Arthur Mattuck, and "18.03 Supplementary Notes" by Haynes Miller. ##Description This course is a study of Ordinary Differential Equations (ODE s), including modeling physical systems. ##Topics include: * Solution of First-order ODE s by Analytical, Graphical and Numerical Methods; * Linear ODE s, Especially Second Order with Constant Coefficients; * Undetermined Coefficients and Variation of Parameters; * Sinusoidal and Exponential Signals: Oscillations, Damping, Resonance; * Complex Numbers and Exponentials; * Fourier Series, Periodic Solutions; * Delta Functions, Convolution, and Laplace Transform Methods; * Matrix and First-order Linear Systems: Eigenvalues and Eigenvectors; and * Non-linear Autonomous Systems: Critical Point Analysis and Phase Plane Diagrams. ##Format The lecture period is used to help students gain expertise in understanding, constructing, solving, and interpreting differential equations. * Students must come to lecture prepared to participate actively. At the first recitation, students are given a set of flashcards to bring to each lecture. 
They are used during class sessions to vote on answers to questions posed occasionally in the lecture. In case of divided opinions, a discussion follows. As a further element of active participation in class, students will often be asked to spend a minute responding to a short feedback question at the end of the lecture. ##Recitations These small groups meet twice a week to discuss and gain experience with the course material. Even more than the lectures, the recitations involve active participation. The recitation leader may begin by asking for questions or hand out problems to work on in small groups. Students are encouraged to ask questions early and often. Recitation leaders also hold office hours. ##Tutoring Another resource of great value to students is the tutoring room, which is staffed by experienced undergraduates; extra staff is added before hour exams. This is a good place to go to work on homework. ##The Ten Essential Skills Students should strive for personal mastery over the following skills. These are the skills that are used in other courses at MIT. This list of skills is widely disseminated among the faculty teaching courses listing 18.03 as a prerequisite. At the moment, 140 courses at MIT list 18.03 as a prerequisite or a corequisite. Model a simple system to obtain a first-order ODE. Visualize solutions using direction fields and isoclines, and approximate them using Euler's method. Solve a first-order linear ODE by the method of integrating factors or variation of parameters. Calculate with complex numbers and exponentials. Solve a constant-coefficient second-order linear initial value problem with a driving term of the form exponential times polynomial. If the input signal is sinusoidal, compute amplitude gain and phase shift. Compute Fourier coefficients, and find periodic solutions of linear ODEs by means of Fourier series. 
Utilize delta functions to model abrupt phenomena, compute the unit impulse response, and express the system response to a general signal by means of the convolution integral. Find the weight function or unit impulse response and solve constant-coefficient linear initial value problems using the Laplace transform together with tables of standard values. Relate the pole diagram of the transfer function to damping characteristics and the frequency response curve. Calculate eigenvalues, eigenvectors, and matrix exponentials, and use them to solve first-order linear systems. Relate first-order systems to higher-order ODEs. Recreate the phase portrait of a two-dimensional linear autonomous system from its trace and determinant. Determine the qualitative behavior of an autonomous nonlinear two-dimensional system by means of an analysis of behavior near critical points. The Ten Essential Skills list is also available as a PDF. ##Homework Each homework assignment has two parts: a first part drawn from the book or notes, and a second part consisting of problems which will be handed out. Both parts are keyed closely to the lectures. Students should form the habit of doing the relevant problems between successive lectures and not try to do the whole set the night before they are due.</description>
<size>6319300522</size>
</item><item>
<title>MIT OCW 8.03 - Physics III - Vibrations and Waves</title>
<category>Course</category>
<infohash>724648552d517756117b47b3a7f5f62962f2629e</infohash>
<guid>https://academictorrents.com/details/724648552d517756117b47b3a7f5f62962f2629e</guid>
<link>https://academictorrents.com/details/724648552d517756117b47b3a7f5f62962f2629e</link>
<description>Course Description: 8.03 Classical theory of vibration and waves. In addition to the traditional topics of mechanical vibrations and waves, coupled oscillators, and electromagnetic radiation, students will also learn about musical instruments, red sunsets, glories, coronae, rainbows, haloes, X-ray binaries, neutron stars, black holes, and big-bang cosmology. Highlights of this Course: This course features a full set of lecture videos, as well as assignments, exams, and other course materials. This is a full set of video lectures, recorded at MIT. The home page of the original lectures can be found at MIT OCW. This torrent is a transcode of the original 220 Kbps .rm files to MPEG-4 (15 fps, 320 x 240 H.264 @ 144 Kbps, with 32 Kbps mono AAC sound), so that it can, for example, be easily played on a phone during a daily commute.</description>
<size>2636137989</size>
</item><item>
<title>MIT OCW 8.01 - Physics I - Classical Mechanics</title>
<category>Course</category>
<infohash>f231c62635aadfb0e4d1f45ddc7b5b6c5592b275</infohash>
<guid>https://academictorrents.com/details/f231c62635aadfb0e4d1f45ddc7b5b6c5592b275</guid>
<link>https://academictorrents.com/details/f231c62635aadfb0e4d1f45ddc7b5b6c5592b275</link>
<description>This is a full set of video lectures, recorded at MIT. The home page of the original lectures can be found at MIT OCW: http://ocw.mit.edu/ This torrent is a transcode of the original 220 Kbps .rm files to MPEG-4 (15 fps, 320 x 240 H.264 @ 144 Kbps, with 32 Kbps mono AAC sound), so that it can, for example, be easily played on a phone during a daily commute. Course description: 8.01 is a first-semester freshman physics class in Newtonian Mechanics, Fluid Mechanics, and Kinetic Gas Theory. In addition to the basic concepts of Newtonian Mechanics, Fluid Mechanics, and Kinetic Gas Theory, a variety of interesting topics are covered in this course: Binary Stars, Neutron Stars, Black Holes, Resonance Phenomena, Musical Instruments, Stellar Collapse, Supernovae, Astronomical observations from very high flying balloons (lecture 35), and you will be allowed a peek into the intriguing Quantum World. Highlights of this Course: This course features lecture notes, problem sets with solutions, exams with solutions, links to related resources, and a complete set of videotaped lectures. The 35 video lectures by Professor Lewin were recorded on the MIT campus during the Fall of 1999. Prof. Lewin is well known at MIT and beyond for his dynamic and engaging lecture style. License: These lectures are generously put online by MIT and are licensed under the Creative Commons License (BY-NC-SA), so it is perfectly legal to share them. Therefore, please seed as long as possible, to ensure this amazing resource stays available.</description>
<size>2412434092</size>
</item><item>
<title>A Local Greedy Scheduling Scheme with Provable Performance Guarantee</title>
<category>Paper</category>
<infohash>996055f238d9d838c88245f57b783cabd0ac4583</infohash>
<guid>https://academictorrents.com/details/996055f238d9d838c88245f57b783cabd0ac4583</guid>
<link>https://academictorrents.com/details/996055f238d9d838c88245f57b783cabd0ac4583</link>
<description>In recent years, there have been many efforts to develop low-complexity scheduling schemes that can approximate optimal performance in multi-hop wireless networks. A centralized sub-optimal scheduling policy called Greedy Maximal Scheduling (GMS) is a good candidate because it achieves high throughput. However, its distributed realization requires O(|V|) complexity, where |V| is the number of nodes in the network, which becomes a major obstacle for practical implementation. In this paper, we develop a simple distributed scheduling policy for multi-hop wireless networks. It achieves O(log |V|) complexity by relaxing the global ordering requirement of GMS. Instead, it deterministically schedules only links that have the largest queue length among their local neighbors. We show that it still guarantees a fraction of the optimal performance that is no smaller than that of GMS. We also further improve its performance and address some important implementation issues. The simulation results confirm that the new scheduling scheme achieves performance equivalent to GMS and significantly outperforms state-of-the-art distributed random access scheduling policies.</description>
<size>401859</size>
</item><item>
<title>Reducing the Sampling Complexity of Topic Models</title>
<category>Paper</category>
<infohash>6bede719062dd637e9977a7d9aa36655ef250885</infohash>
<guid>https://academictorrents.com/details/6bede719062dd637e9977a7d9aa36655ef250885</guid>
<link>https://academictorrents.com/details/6bede719062dd637e9977a7d9aa36655ef250885</link>
<description/>
<size>1320837</size>
</item><item>
<title>Simple and Deterministic Matrix Sketching</title>
<category>Paper</category>
<infohash>a48d1910d9c5fc7a17c196e43504134a9cf955f4</infohash>
<guid>https://academictorrents.com/details/a48d1910d9c5fc7a17c196e43504134a9cf955f4</guid>
<link>https://academictorrents.com/details/a48d1910d9c5fc7a17c196e43504134a9cf955f4</link>
<description>We adapt a well-known streaming algorithm for approximating item frequencies to the matrix sketching setting. The algorithm receives the rows of a large matrix A ∈ R^(n×m) one after the other in a streaming fashion. ...</description>
<size>455509</size>
</item><item>
<title>Recovering from Selection Bias in Causal and Statistical Inference</title>
<category>Paper</category>
<infohash>94e2258d146e3fe336b2cd25c8b3d1f67f5b829c</infohash>
<guid>https://academictorrents.com/details/94e2258d146e3fe336b2cd25c8b3d1f67f5b829c</guid>
<link>https://academictorrents.com/details/94e2258d146e3fe336b2cd25c8b3d1f67f5b829c</link>
<description>Selection bias is caused by preferential exclusion of units from the samples and represents a major obstacle to valid causal and statistical inferences; it cannot be removed by randomized experiments and can rarely be detected in either experimental or observational studies. In this paper, we provide complete graphical and algorithmic conditions for recovering conditional probabilities from selection biased data. We also provide graphical conditions for recoverability when unbiased data is available over a subset of the variables. Finally, we provide a graphical condition that generalizes the backdoor criterion and serves to recover causal effects when the data is collected under preferential selection.</description>
<size>1472232</size>
</item><item>
<title>20150112.json.gz</title>
<category>Dataset</category>
<infohash>466d6a3794328acc7c068a45f0380ef3ade8345f</infohash>
<guid>https://academictorrents.com/details/466d6a3794328acc7c068a45f0380ef3ade8345f</guid>
<link>https://academictorrents.com/details/466d6a3794328acc7c068a45f0380ef3ade8345f</link>
<description>JSON dump of Wikidata from 2015-01-12.</description>
<size>3908362534</size>
</item><item>
<title>ClueWeb09_Anchors (anchor text derived from CMU's ClueWeb09 web crawl)</title>
<category>Dataset</category>
<infohash>bb36fce78df3609627ec3495ca4fa37c28fcee18</infohash>
<guid>https://academictorrents.com/details/bb36fce78df3609627ec3495ca4fa37c28fcee18</guid>
<link>https://academictorrents.com/details/bb36fce78df3609627ec3495ca4fa37c28fcee18</link>
<description>Anchor texts extracted from ClueWeb09 https://djoerdhiemstra.com/2010/anchor-text-for-clueweb09-category-a/</description>
<size>24455903094</size>
</item><item>
<title>ClueWeb12_Anchors (anchor text derived from CMU's ClueWeb12 web crawl) </title>
<category>Dataset</category>
<infohash>8ecbbc8360a2d8b6438000ebf257ed06e2eaeb20</infohash>
<guid>https://academictorrents.com/details/8ecbbc8360a2d8b6438000ebf257ed06e2eaeb20</guid>
<link>https://academictorrents.com/details/8ecbbc8360a2d8b6438000ebf257ed06e2eaeb20</link>
<description>Anchor texts extracted from ClueWeb12 https://djoerdhiemstra.com/2013/anchor-text-for-clueweb12/</description>
<size>30349856980</size>
</item><item>
<title>arxiv_src_1411</title>
<category>Dataset</category>
<infohash>87974031f70a27a5ad556c68b4d7275b692abb33</infohash>
<guid>https://academictorrents.com/details/87974031f70a27a5ad556c68b4d7275b692abb33</guid>
<link>https://academictorrents.com/details/87974031f70a27a5ad556c68b4d7275b692abb33</link>
<description/>
<size>7263500288</size>
</item><item>
<title>arxiv_src_1410</title>
<category>Dataset</category>
<infohash>c5d6a5ce3f0be00931c381e95872e6b71d10c59f</infohash>
<guid>https://academictorrents.com/details/c5d6a5ce3f0be00931c381e95872e6b71d10c59f</guid>
<link>https://academictorrents.com/details/c5d6a5ce3f0be00931c381e95872e6b71d10c59f</link>
<description>arXiv_src_1410_001.tar  arXiv_src_1410_002.tar  arXiv_src_1410_003.tar  arXiv_src_1410_004.tar arXiv_src_1410_005.tar  arXiv_src_1410_006.tar  arXiv_src_1410_007.tar  arXiv_src_1410_008.tar arXiv_src_1410_009.tar  arXiv_src_1410_010.tar  arXiv_src_1410_011.tar arXiv_src_1410_012.tar  arXiv_src_1410_013.tar  arXiv_src_1410_014.tar arXiv_src_1410_015.tar  arXiv_src_1410_016.tar  arXiv_src_1410_017.tar</description>
<size>8570695680</size>
</item><item>
<title>arXiv_src_1401_001.tar</title>
<category>Dataset</category>
<infohash>0e59d1a963a30986ce8bd35de3549a9f40a3c408</infohash>
<guid>https://academictorrents.com/details/0e59d1a963a30986ce8bd35de3549a9f40a3c408</guid>
<link>https://academictorrents.com/details/0e59d1a963a30986ce8bd35de3549a9f40a3c408</link>
<description/>
<size>526254080</size>
</item><item>
<title>arXiv_src_1401_002.tar</title>
<category>Dataset</category>
<infohash>d01554bd7191c90197f3d0ec90718231c0ba957a</infohash>
<guid>https://academictorrents.com/details/d01554bd7191c90197f3d0ec90718231c0ba957a</guid>
<link>https://academictorrents.com/details/d01554bd7191c90197f3d0ec90718231c0ba957a</link>
<description/>
<size>522280960</size>
</item><item>
<title>arXiv_src_1401_004.tar</title>
<category>Dataset</category>
<infohash>a2e195c38ac5dd13de62e3816dc46b93c90035ff</infohash>
<guid>https://academictorrents.com/details/a2e195c38ac5dd13de62e3816dc46b93c90035ff</guid>
<link>https://academictorrents.com/details/a2e195c38ac5dd13de62e3816dc46b93c90035ff</link>
<description/>
<size>524974080</size>
</item><item>
<title>arXiv_src_1401_003.tar</title>
<category>Dataset</category>
<infohash>d237b33d354a97864b659cc4e1293e75018bc1dc</infohash>
<guid>https://academictorrents.com/details/d237b33d354a97864b659cc4e1293e75018bc1dc</guid>
<link>https://academictorrents.com/details/d237b33d354a97864b659cc4e1293e75018bc1dc</link>
<description/>
<size>535429120</size>
</item><item>
<title>WISE All-Sky Release Catalog</title>
<category>Dataset</category>
<infohash>eb0c2b4686a4c5e84f70938210c1c117771b40c8</infohash>
<guid>https://academictorrents.com/details/eb0c2b4686a4c5e84f70938210c1c117771b40c8</guid>
<link>https://academictorrents.com/details/eb0c2b4686a4c5e84f70938210c1c117771b40c8</link>
<description>The WISE All-Sky Release Source Catalog is available for bulk download in compressed (gzip or bzip2) ASCII form. Catalog records are lines in a simple bar-delimited format. Users should be aware that the Catalog table is extremely large. Due to its size, the complete Catalog has been split into 50 parts, each roughly 16 GB in size (gzip-compressed parts are 5.5-5.7 GB each, bzip2-compressed parts are 4.3-4.6 GB each). You will need approximately 280 GB of disk space to download all 50 gzipped Catalog parts, or 225 GB of disk space to download all 50 bzip2-ed Catalog parts, and an additional 805 GB to uncompress the parts. Concatenating the parts results in a Catalog with 563,921,584 records. The full uncompressed Catalog file is 864,201,356,616 bytes. Each Catalog part contains records corresponding to a specified declination range. If a user desires only the sources within a specified dec range, that user may choose to download only those Catalog parts corresponding to that range. The dec range covered by each of the Catalog parts is indicated in the table below. The Catalog is searchable online through IRSA's General Catalog Search service (Gator), accessible from the main IRSA site at http://irsa.ipac.caltech.edu. Catalog Retrieval Instructions Download all 50 Catalog parts in bzip2 or gzip format from the links below, or use the wget scripts provided for either gzip or bzip2 format. Optional: After download, use these MD5 checksum files to verify the download: gzip md5 file, or bzip2 md5 file. Next, decompress all 50 files using bzip2 or gunzip (depending on format). The files can then be loaded into a database individually, or concatenated to produce the complete Catalog as a single 805 GB file. The details of this depend on the user's software. Users may also choose to download only those Catalog parts corresponding to desired dec ranges, using the individual download links in the table below. 
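Since each Catalog record is a plain bar-delimited line, a downloaded part can be streamed record by record without loading the whole file. A minimal Python sketch follows; the column names and positions used here are purely illustrative, and the real ones must be taken from the Catalog format description:

```python
def parse_wise_records(path, columns):
    """Stream a bar-delimited Catalog part, yielding one dict per source.

    `columns` maps a column name to its 0-based field position; the actual
    names and positions are documented in the Catalog format description.
    """
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split("|")
            yield {name: fields[i] for name, i in columns.items()}

# Illustrative use (the file name and column positions are hypothetical):
# for src in parse_wise_records("wise-allsky-cat-part01", {"ra": 1, "dec": 2}):
#     print(src["ra"], src["dec"])
```

Because the parts are split by declination range, running this loop over a single part covers one dec band; concatenation is only needed when the full sky is required.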
Documentation Complete documentation of the WISE All-Sky Release, including the Source Catalog, is contained in the Explanatory Supplement. The Catalog format is detailed here: Catalog format and column description. A simple schema for the Catalog is here: schema. Acknowledgments If you use IRSA in your research, please include the following acknowledgment in your paper: "This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration." Please include the following standard acknowledgment in any published material that makes use of data products from the primary WISE mission such as the Source Catalog: "This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration." Contact For assistance please contact IRSA user support using the "Helpdesk" link at http://irsa.ipac.caltech.edu, or here: http://irsa.ipac.caltech.edu/applications/Helpdesk</description>
<size>237115189877</size>
</item><item>
<title>A general method applicable to the search for similarities in the amino acid sequence of two proteins </title>
<category>Paper</category>
<infohash>8c6a6a95236461d9e249a820a6d67cf3dbf13dc0</infohash>
<guid>https://academictorrents.com/details/8c6a6a95236461d9e249a820a6d67cf3dbf13dc0</guid>
<link>https://academictorrents.com/details/8c6a6a95236461d9e249a820a6d67cf3dbf13dc0</link>
<description>"A computer adaptable method for finding similarities in the amino acid sequences of two proteins has been developed. From these findings it is possible to determine whether significant homology exists between the proteins. This information is used to trace their possible evolutionary development. The maximum match is a number dependent upon the similarity of the sequences. One of its definitions is the largest number of amino acids of one protein that can be matched with those of a second protein allowing for all possible interruptions in either of the sequences. While the interruptions give rise to a very large number of comparisons, the method efficiently excludes from consideration those comparisons that cannot contribute to the maximum match. Comparisons are made from the smallest unit of significance, a pair of amino acids, one from each protein. All possible pairs are represented by a two-dimensional array, and all possible comparisons are represented by pathways through the array. For this maximum match only certain of the possible pathways must be evaluated. A numerical value, one in this case, is assigned to every cell in the array representing like amino acids. The maximum match is the largest number that would result from summing the cell values of every pathway. "</description>
<size>641724</size>
</item><item>
<title>The Role of Context information in L2 Translation Assistance (Data Set)</title>
<category>Dataset</category>
<infohash>ab6e61059b7f7879e027ca33fb0f9e82980cd855</infohash>
<guid>https://academictorrents.com/details/ab6e61059b7f7879e027ca33fb0f9e82980cd855</guid>
<link>https://academictorrents.com/details/ab6e61059b7f7879e027ca33fb0f9e82980cd855</link>
<description>We investigate to what extent L2 context information can aid the translation of L1 fragments in an L2 context, and what techniques are most suitable. The task is framed in the context of second language learning, where translation assistance systems enable language learners to write in their target language whilst allowing them to fall back to their native language in case the correct word or expression is not known. These code switches are subsequently translated to L2 given the L2 context. We focus on two approaches: a classifier-based approach, and one rooted in Statistical Machine Translation. Various mixtures between the two are investigated. In doing so, we provide valuable insights on how to best tackle the task presented at SemEval 2014.  We zoom in on the role of context information (in L2) and of the L2 language model, and investigate the incorporation of memory-based classifiers as a means of better disambiguating the L1 fragments. We find Statistical Machine Translation to be the most adequate solution to the problem, and show how it can be applied with a cross-lingual context. Integrating classifiers in such a framework may lead to small improvements in translation quality, but there is considerable overlap with the benefits of the L2 language model.</description>
<size>1991662610</size>
</item><item>
<title>The Extended Yale Face Database B</title>
<category>Dataset</category>
<infohash>06e479f338b56fa5948c40287b66f68236a14612</infohash>
<guid>https://academictorrents.com/details/06e479f338b56fa5948c40287b66f68236a14612</guid>
<link>https://academictorrents.com/details/06e479f338b56fa5948c40287b66f68236a14612</link>
<description>The extended Yale Face Database B contains 16128 images of 28 human subjects under 9 poses and 64 illumination conditions. The data format of this database is the same as that of the Yale Face Database B (http://cvc.yale.edu/projects/yalefacesB/yalefacesB.html). Please refer to the homepage of the Yale Face Database B for more detailed information on the data format. You are free to use the extended Yale Face Database B for research purposes. All publications which use this database should acknowledge the use of "the Extended Yale Face Database B" and reference Athinodoros Georghiades, Peter Belhumeur, and David Kriegman's paper, "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose," PAMI, 2001. The extended database, as opposed to the original Yale Face Database B with 10 subjects, was first reported by Kuang-Chih Lee, Jeffrey Ho, and David Kriegman in "Acquiring Linear Subspaces for Face Recognition under Variable Lighting," PAMI, May 2005 (http://vision.ucsd.edu/~leekc/papers/9pltsIEEE.pdf). All test image data used in the experiments are manually aligned, cropped, and then re-sized to 168x192 images. If you publish your experimental results with the cropped images, please reference the PAMI 2005 paper as well.</description>
<size>2086248732</size>
</item><item>
<title>Yale YouTube Video Text</title>
<category>Dataset</category>
<infohash>156802226bcf5747e0bea4e4f14c03b3b952de80</infohash>
<guid>https://academictorrents.com/details/156802226bcf5747e0bea4e4f14c03b3b952de80</guid>
<link>https://academictorrents.com/details/156802226bcf5747e0bea4e4f14c03b3b952de80</link>
<description>YouTube Video Text (YVT) contains 30 videos. Each video is 15 seconds long at 30 frames per second in HD 720p quality, and was collected from YouTube. The text content in the dataset can be divided into two categories: overlay text (e.g., captions, song titles, logos) and scene text (e.g., street signs, business signs, words on shirts).</description>
<size>434765881</size>
</item><item>
<title>The Extended Yale Face Database B (Cropped)</title>
<category>Dataset</category>
<infohash>aad8bf8e6ee5d8a3bf46c7ab5adfacdd8ad36247</infohash>
<guid>https://academictorrents.com/details/aad8bf8e6ee5d8a3bf46c7ab5adfacdd8ad36247</guid>
<link>https://academictorrents.com/details/aad8bf8e6ee5d8a3bf46c7ab5adfacdd8ad36247</link>
<description>This is the cropped version of "The Extended Yale Face Database B". The extended Yale Face Database B contains 16128 images of 28 human subjects under 9 poses and 64 illumination conditions. The data format of this database is the same as that of the Yale Face Database B (http://cvc.yale.edu/projects/yalefacesB/yalefacesB.html). Please refer to the homepage of the Yale Face Database B for more detailed information on the data format. You are free to use the extended Yale Face Database B for research purposes. All publications which use this database should acknowledge the use of "the Extended Yale Face Database B" and reference Athinodoros Georghiades, Peter Belhumeur, and David Kriegman's paper, "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose," PAMI, 2001. The extended database, as opposed to the original Yale Face Database B with 10 subjects, was first reported by Kuang-Chih Lee, Jeffrey Ho, and David Kriegman in "Acquiring Linear Subspaces for Face Recognition under Variable Lighting," PAMI, May 2005 (http://vision.ucsd.edu/~leekc/papers/9pltsIEEE.pdf). All test image data used in the experiments are manually aligned, cropped, and then re-sized to 168x192 images. If you publish your experimental results with the cropped images, please reference the PAMI 2005 paper as well.</description>
<size>58493820</size>
</item><item>
<title>MNIST Database</title>
<category>Dataset</category>
<infohash>ce990b28668abf16480b8b906640a6cd7e3b8b21</infohash>
<guid>https://academictorrents.com/details/ce990b28668abf16480b8b906640a6cd7e3b8b21</guid>
<link>https://academictorrents.com/details/ce990b28668abf16480b8b906640a6cd7e3b8b21</link>
<description>The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image. It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting. The original black and white (bilevel) images from NIST were size-normalized to fit in a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels as a result of the anti-aliasing technique used by the normalization algorithm. The images were centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field. With some classification methods (particularly template-based methods, such as SVM and K-nearest neighbors), the error rate improves when the digits are centered by bounding box rather than center of mass. If you do this kind of pre-processing, you should report it in your publications. The MNIST database was constructed from NIST's Special Database 3 and Special Database 1, which contain binary images of handwritten digits. NIST originally designated SD-3 as their training set and SD-1 as their test set. However, SD-3 is much cleaner and easier to recognize than SD-1. The reason for this can be found in the fact that SD-3 was collected among Census Bureau employees, while SD-1 was collected among high-school students. Drawing sensible conclusions from learning experiments requires that the result be independent of the choice of training set and test set among the complete set of samples. Therefore it was necessary to build a new database by mixing NIST's datasets. The MNIST training set is composed of 30,000 patterns from SD-3 and 30,000 patterns from SD-1. 
Our test set was composed of 5,000 patterns from SD-3 and 5,000 patterns from SD-1. The 60,000-pattern training set contained examples from approximately 250 writers. We made sure that the sets of writers of the training set and test set were disjoint. SD-1 contains 58,527 digit images written by 500 different writers. In contrast to SD-3, where blocks of data from each writer appeared in sequence, the data in SD-1 is scrambled. Writer identities for SD-1 are available, and we used this information to unscramble the writers. We then split SD-1 in two: characters written by the first 250 writers went into our new training set. The remaining 250 writers were placed in our test set. Thus we had two sets with nearly 30,000 examples each. The new training set was completed with enough examples from SD-3, starting at pattern #0, to make a full set of 60,000 training patterns. Similarly, the new test set was completed with SD-3 examples starting at pattern #35,000 to make a full set with 60,000 test patterns. Only a subset of 10,000 test images (5,000 from SD-1 and 5,000 from SD-3) is available on this site. The full 60,000-sample training set is available. Many methods have been tested with this training set and test set. Here are a few examples. Details about the methods are given in an upcoming paper. Some of those experiments used a version of the database where the input images were deskewed (by computing the principal axis of the shape that is closest to the vertical, and shifting the lines so as to make it vertical). In some other experiments, the training set was augmented with artificially distorted versions of the original training samples. The distortions are random combinations of shifts, scaling, skewing, and compression. FILE FORMATS FOR THE MNIST DATABASE The data is stored in a very simple file format designed for storing vectors and multidimensional matrices. 
General info on this format is given at the end of this page, but you don't need to read that to use the data files. All the integers in the files are stored in the MSB first (high endian) format used by most non-Intel processors. Users of Intel processors and other low-endian machines must flip the bytes of the header. There are 4 files:

train-images-idx3-ubyte: training set images
train-labels-idx1-ubyte: training set labels
t10k-images-idx3-ubyte:  test set images
t10k-labels-idx1-ubyte:  test set labels

The training set contains 60000 examples, and the test set 10000 examples. The first 5000 examples of the test set are taken from the original NIST training set. The last 5000 are taken from the original NIST test set. The first 5000 are cleaner and easier than the last 5000.

TRAINING SET LABEL FILE (train-labels-idx1-ubyte):
[offset] [type]          [value]          [description]
0000     32 bit integer  0x00000801(2049) magic number (MSB first)
0004     32 bit integer  60000            number of items
0008     unsigned byte   ??               label
0009     unsigned byte   ??               label
........
xxxx     unsigned byte   ??               label

The label values are 0 to 9.

TRAINING SET IMAGE FILE (train-images-idx3-ubyte):
[offset] [type]          [value]          [description]
0000     32 bit integer  0x00000803(2051) magic number
0004     32 bit integer  60000            number of images
0008     32 bit integer  28               number of rows
0012     32 bit integer  28               number of columns
0016     unsigned byte   ??               pixel
0017     unsigned byte   ??               pixel
........
xxxx     unsigned byte   ??               pixel

Pixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black). 
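The record layout above (a magic number, MSB-first 4-byte dimension sizes, then raw bytes) can be read with a few lines of code. A sketch in Python, assuming the files have been downloaded and decompressed; the function name is ours, not part of the distribution:

```python
import struct

def read_idx(path):
    """Parse an IDX file: magic number, dimension sizes, then raw data."""
    with open(path, "rb") as f:
        data = f.read()
    # Magic number: two zero bytes, a type code, and the dimension count.
    if data[:2] != b"\x00\x00":
        raise ValueError("not an IDX file")
    dtype, ndim = data[2], data[3]
    # MNIST files use type code 0x08 (unsigned byte).
    # Dimension sizes are 4-byte MSB-first ("big-endian") integers.
    dims = struct.unpack(">" + "I" * ndim, data[4:4 + 4 * ndim])
    return dtype, dims, data[4 + 4 * ndim:]

# For train-images-idx3-ubyte this gives dims == (60000, 28, 28);
# for train-labels-idx1-ubyte, dims == (60000,).
```

On a little-endian (e.g. Intel) machine the ">" format prefix does the header byte-flipping described above; the pixel and label payload is single bytes and needs no swapping.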
TEST SET LABEL FILE (t10k-labels-idx1-ubyte):
[offset] [type]          [value]          [description]
0000     32 bit integer  0x00000801(2049) magic number (MSB first)
0004     32 bit integer  10000            number of items
0008     unsigned byte   ??               label
0009     unsigned byte   ??               label
........
xxxx     unsigned byte   ??               label

The label values are 0 to 9.

TEST SET IMAGE FILE (t10k-images-idx3-ubyte):
[offset] [type]          [value]          [description]
0000     32 bit integer  0x00000803(2051) magic number
0004     32 bit integer  10000            number of images
0008     32 bit integer  28               number of rows
0012     32 bit integer  28               number of columns
0016     unsigned byte   ??               pixel
0017     unsigned byte   ??               pixel
........
xxxx     unsigned byte   ??               pixel

Pixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black).

THE IDX FILE FORMAT
The IDX file format is a simple format for vectors and multidimensional matrices of various numerical types. The basic format is:

magic number
size in dimension 0
size in dimension 1
size in dimension 2
.....
size in dimension N
data

The magic number is an integer (MSB first). The first 2 bytes are always 0. The third byte codes the type of the data:
0x08: unsigned byte
0x09: signed byte
0x0B: short (2 bytes)
0x0C: int (4 bytes)
0x0D: float (4 bytes)
0x0E: double (8 bytes)
The 4th byte codes the number of dimensions of the vector/matrix: 1 for vectors, 2 for matrices....
The sizes in each dimension are 4-byte integers (MSB first, high endian, like in most non-Intel processors). The data is stored like in a C array, i.e. the index in the last dimension changes the fastest.</description>
<size>11594722</size>
</item><item>
<title>AMPLITUDE PONTOS DISCREPANTES TAMANHO DE AMOSTRA EM ESTATISTICA.pdf</title>
<category>Paper</category>
<infohash>d28b65b192671ee9324c2def9f21f32fa63fccc2</infohash>
<guid>https://academictorrents.com/details/d28b65b192671ee9324c2def9f21f32fa63fccc2</guid>
<link>https://academictorrents.com/details/d28b65b192671ee9324c2def9f21f32fa63fccc2</link>
<description>In this work, methods are developed for determining the range of a sample, identifying outliers, and determining the sample size. These topics receive little attention in statistics textbooks, and the solutions offered are not always satisfactory when applied to engineering problems. We present some alternatives, based more on common sense than on theoretical knowledge. The work is divided into three parts, each devoted to one of these topics. The object throughout is the analysis of random samples of a continuous variable with an approximately normal distribution.</description>
<size>1695807</size>
</item><item>
<title>NLCD2006 Land Cover (2011 Edition) nlcd_2006_landcover_2011_edition_2014_03_31.zip</title>
<category>Dataset</category>
<infohash>081cae4ec8ce93a6b86ea1b55a4cca113a257593</infohash>
<guid>https://academictorrents.com/details/081cae4ec8ce93a6b86ea1b55a4cca113a257593</guid>
<link>https://academictorrents.com/details/081cae4ec8ce93a6b86ea1b55a4cca113a257593</link>
<description>The most recent 2011 Edition of the NLCD 2006 land cover layer for the conterminous United States, for all pixels. National Land Cover Database 2006 (NLCD2006) is a 16-class land cover classification scheme that has been applied consistently across the conterminous United States at a spatial resolution of 30 meters. NLCD2006 is based primarily on the unsupervised classification of Landsat Enhanced Thematic Mapper+ (ETM+) circa 2006 satellite data. NLCD2006 also quantifies land cover change between the years 2001 and 2006. The NLCD2006 land cover change product was generated by comparing spectral characteristics of Landsat imagery between 2001 and 2006, on an individual path/row basis, using protocols to identify and label change based on the trajectory from NLCD2001 products. It represents the first time this type of 30 meter resolution land cover change product has been produced for the conterminous United States. A formal accuracy assessment of the NLCD2006 land cover change product is planned for 2011. Generation of NLCD2006 products helped to identify some issues in only the NLCD2001 land cover and percent developed imperviousness products (there were no changes to the NLCD2001 percent canopy). These issues were evaluated and corrected, necessitating a reissue of NLCD2001 products (NLCD2001 Version 2.0) as part of the NLCD2006 release. A majority of the NLCD2001 updates occurred in coastal mapping zones where NLCD2001 was published prior to the completion of the National Oceanic and Atmospheric Administration (NOAA) Coastal Change Analysis Program (C-CAP) 2001 land cover products. NOAA C-CAP 2001 land cover has now been seamlessly integrated with NLCD2001 land cover for all coastal zones. NLCD2001 percent developed imperviousness was also updated as part of this process.</description>
<size>1093422220</size>
</item><item>
<title>MPEG-7 Core Experiment CE-Shape-1 [tar.gz]</title>
<category>Dataset</category>
<infohash>0a8cb3446b0de5690fee29a2c68922ff691c7f9a</infohash>
<guid>https://academictorrents.com/details/0a8cb3446b0de5690fee29a2c68922ff691c7f9a</guid>
<link>https://academictorrents.com/details/0a8cb3446b0de5690fee29a2c68922ff691c7f9a</link>
<description>Here are the first shapes from each class in the MPEG-7 Core Experiment CE-Shape-1 Test Set. MPEG-7 Core Experiment CE-Shape-1 is a popular database for shape matching evaluation, consisting of 70 shape categories, where each category is represented by 20 different images with high intra-class variability. The shapes are defined by a binary mask outlining the objects. The evaluation protocol for this retrieval task is the bullseye rating, in which each image is used as a reference and compared to all of the other images. The mean percentage of correct images in the top 40 matches (the 40 images with the lowest shape dissimilarity values) is taken as the bullseye rating. The Latecki group maintains an overview of recent results here: http://knight.cis.temple.edu/ shape/MPEG7/results.html. Download MPEG-7 Core Experiment CE-Shape-1: http://www.cis.temple.edu/ latecki/TestData/mpeg7shapeB.tar.gz Note: this raises interesting questions about how to define the shape of an object, as there are very similar objects (apples and device9) in two different categories, while the octopus category has much larger intra-class variance yet remains a single category.
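The bullseye rating described above can be sketched as follows; this is an illustrative implementation (the function name, arguments, and toy data are not part of the dataset), assuming a precomputed pairwise dissimilarity matrix:

```python
import numpy as np

def bullseye_score(dist, labels, top_k=40, per_class=20):
    """For each query image, take the top_k entries with the smallest
    dissimilarity (the query itself included), count how many share the
    query's class, and average over all queries and the class size."""
    n = len(labels)
    hits = 0
    for i in range(n):
        nearest = np.argsort(dist[i])[:top_k]
        hits += sum(1 for j in nearest if labels[j] == labels[i])
    return hits / (n * per_class)

# Toy example: 4 shapes in 2 classes, where same-class pairs are closer.
dist = np.array([[0, 1, 5, 5],
                 [1, 0, 5, 5],
                 [5, 5, 0, 1],
                 [5, 5, 1, 0]])
labels = [0, 0, 1, 1]
score = bullseye_score(dist, labels, top_k=2, per_class=2)
```

For CE-Shape-1 itself the defaults apply: 1400 images, 20 per class, top 40 matches.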
$ tar -ztvf mpeg7shapeB.tar.gz
-rwxrwxrwx  0 latecki users    1723 Nov 12  1999 original/Bone-1.gif
-rwxrwxrwx  0 latecki users    1819 Nov 12  1999 original/Bone-10.gif
-rwxrwxrwx  0 latecki users    1745 Nov 12  1999 original/Bone-11.gif
-rwxrwxrwx  0 latecki users    1738 Nov 12  1999 original/Bone-12.gif
-rwxrwxrwx  0 latecki users    1322 Nov 12  1999 original/Bone-13.gif
-rwxrwxrwx  0 latecki users    1720 Nov 12  1999 original/Bone-14.gif
-rwxrwxrwx  0 latecki users    1654 Nov 12  1999 original/Bone-15.gif
-rwxrwxrwx  0 latecki users    1759 Nov 12  1999 original/Bone-16.gif
-rwxrwxrwx  0 latecki users    1739 Nov 12  1999 original/Bone-17.gif
-rwxrwxrwx  0 latecki users    1489 Nov 12  1999 original/Bone-18.gif
-rwxrwxrwx  0 latecki users    1772 Nov 12  1999 original/Bone-19.gif
-rwxrwxrwx  0 latecki users    1714 Nov 12  1999 original/Bone-2.gif
-rwxrwxrwx  0 latecki users    1459 Nov 12  1999 original/Bone-20.gif
-rwxrwxrwx  0 latecki users    1759 Nov 12  1999 original/Bone-3.gif
...........
-rwxrwxrwx  0 latecki users    1664 Nov 12  1999 original/watch-17.gif
-rwxrwxrwx  0 latecki users    1873 Nov 12  1999 original/watch-18.gif
-rwxrwxrwx  0 latecki users    1881 Nov 12  1999 original/watch-19.gif
-rwxrwxrwx  0 latecki users    2720 Nov 12  1999 original/watch-2.gif
-rwxrwxrwx  0 latecki users    1889 Nov 12  1999 original/watch-20.gif
-rwxrwxrwx  0 latecki users    1699 Nov 12  1999 original/watch-3.gif
-rwxrwxrwx  0 latecki users    1757 Nov 12  1999 original/watch-4.gif
-rwxrwxrwx  0 latecki users    1802 Nov 12  1999 original/watch-5.gif
-rwxrwxrwx  0 latecki users    1765 Nov 12  1999 original/watch-6.gif
-rwxrwxrwx  0 latecki users    1840 Nov 12  1999 original/watch-7.gif
-rwxrwxrwx  0 latecki users    1927 Nov 12  1999 original/watch-8.gif
-rwxrwxrwx  0 latecki users    1719 Nov 12  1999 original/watch-9.gif</description>
<size>2268576</size>
</item><item>
<title>MPEG-7 Core Experiment CE-Shape-1</title>
<category>Dataset</category>
<infohash>0f9ac75f2d9e2ce2ef7b800aa23882915f4e31fa</infohash>
<guid>https://academictorrents.com/details/0f9ac75f2d9e2ce2ef7b800aa23882915f4e31fa</guid>
<link>https://academictorrents.com/details/0f9ac75f2d9e2ce2ef7b800aa23882915f4e31fa</link>
<description>Here are the first shapes from each class in the MPEG-7 Core Experiment CE-Shape-1 Test Set. MPEG-7 Core Experiment CE-Shape-1 is a popular database for shape matching evaluation, consisting of 70 shape categories, where each category is represented by 20 different images with high intra-class variability. The shapes are defined by a binary mask outlining the objects. The evaluation protocol for this retrieval task is the bullseye rating, in which each image is used as a reference and compared to all of the other images. The mean percentage of correct images in the top 40 matches (the 40 images with the lowest shape dissimilarity values) is taken as the bullseye rating. The Latecki group maintains an overview of recent results here: http://knight.cis.temple.edu/ shape/MPEG7/results.html. Download MPEG-7 Core Experiment CE-Shape-1: http://www.cis.temple.edu/ latecki/TestData/mpeg7shapeB.tar.gz Note: this raises interesting questions about how to define the shape of an object, as there are very similar objects (apples and device9) in two different categories, while the octopus category has much larger intra-class variance yet remains a single category.</description>
<size>3425218</size>
</item><item>
<title>fossilbook.pdf</title>
<category>Paper</category>
<infohash>082d7de40654debaf26b14b88a431228208259f5</infohash>
<guid>https://academictorrents.com/details/082d7de40654debaf26b14b88a431228208259f5</guid>
<link>https://academictorrents.com/details/082d7de40654debaf26b14b88a431228208259f5</link>
<description>There are plenty of open-source version control systems available on the internet these days. What makes Fossil worthy of attention? Bug Tracking And Wiki - In addition to doing distributed version control like Git and Mercurial, Fossil also supports distributed bug tracking, distributed wiki, and a distributed blog mechanism all in a single integrated package. Web Interface - Fossil has a built-in and easy-to-use web interface that simplifies project tracking and promotes situational awareness. Simply type "fossil ui" from within any check-out and Fossil automatically opens your web browser in a page that gives detailed graphical history and status information on that project. This entire website (except the download page) is just a running instance of Fossil. The pages you see here are all wiki or embedded documentation. When you clone Fossil from one of its self-hosting repositories, you get more than just source code - you get this entire website. Autosync - Fossil supports "autosync" mode which helps to keep projects moving forward by reducing the amount of needless forking and merging often associated with distributed projects. Self-Contained - Fossil is a single stand-alone executable that contains everything needed to do configuration management. Installation is trivial: simply download a precompiled binary for Linux, Mac, or Windows and put it on your $PATH. Easy-to-compile source code is available for users on other platforms. Fossil sources are also mostly self-contained, requiring only the standard C library to build. Simple Networking - Fossil uses plain old HTTP (with proxy support) for all network communications, meaning that it works fine from behind restrictive firewalls. The protocol is bandwidth efficient to the point that Fossil can be used comfortably over a dial-up internet connection. CGI/SCGI Enabled - No server is required to use fossil. But a server does make collaboration easier. 
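As an illustration of the CGI approach mentioned above, a Fossil CGI script is typically just an interpreter line pointing at the fossil binary plus a repository directive; both paths below are placeholders for a local installation:

```
#!/usr/bin/fossil
repository: /home/www/repo.fossil
```

Placed in a web server's cgi-bin and marked executable, this serves the repository's full web interface over plain HTTP.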
Fossil supports four different yet simple server configurations. The most popular is a 2-line CGI script. This is the approach used by the self-hosting fossil repositories. Robust &amp; Reliable - Fossil stores content using an enduring file format in an SQLite database so that transactions are atomic even if interrupted by a power loss or system crash. Furthermore, automatic self-checks verify that all aspects of the repository are consistent prior to each commit. In over six years of operation, no work has ever been lost after having been committed to a Fossil repository.</description>
<size>3474536</size>
</item><item>
<title>iCubWorld1.0 dataset</title>
<category>Dataset</category>
<infohash>40bc001de97101552a2974ed880bafa377e055f5</infohash>
<guid>https://academictorrents.com/details/40bc001de97101552a2974ed880bafa377e055f5</guid>
<link>https://academictorrents.com/details/40bc001de97101552a2974ed880bafa377e055f5</link>
<description>This is the first release of the iCubWorld dataset. It consists of seven instances of objects acquired in two different modalities, human and robot, as defined earlier. The size of the images is 320x240, subsequently cropped to the bounding box size according to the following: Human mode: the bounding box is set to 80x80; Robot mode: the bounding box is set to 160x160. The kinematics of the robot is known and used to position the bounding box. The independent motion detector method is used to position the bounding box in the human mode. We provide 500 images per class during the training phase and 500 images per class for the testing phase.

Archive:  iCubWorld1.0.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  2013-03-20 16:23   iCubWorld1.0/human/
        0  2013-03-20 16:32   iCubWorld1.0/human/test/
        0  2013-03-20 16:29   iCubWorld1.0/human/test/bottle/
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000000.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000001.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000002.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000003.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000004.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000005.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000006.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000007.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000008.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000009.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000010.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000011.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000012.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000013.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000014.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000015.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000016.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000017.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000018.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000019.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000020.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000021.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000022.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000023.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000024.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000025.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000026.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000027.ppm
    19213  2012-09-11 09:52   iCubWorld1.0/human/test/bottle/00000028.ppm
...
    76815  2012-09-09 16:27   iCubWorld1.0/robot/train/turtle/00003497.ppm
    76815  2012-09-09 16:27   iCubWorld1.0/robot/train/turtle/00003498.ppm
    76815  2012-09-09 16:27   iCubWorld1.0/robot/train/turtle/00003499.ppm
        0  2013-03-20 16:33   iCubWorld1.0/
---------                     -------
651282550                     14035 files</description>
<size>504764932</size>
</item><item>
<title>iCubWorld dataset</title>
<category>Dataset</category>
<infohash>aacb968fdc3f30c637cb246ca8f26f806998bc83</infohash>
<guid>https://academictorrents.com/details/aacb968fdc3f30c637cb246ca8f26f806998bc83</guid>
<link>https://academictorrents.com/details/aacb968fdc3f30c637cb246ca8f26f806998bc83</link>
<description>10 categories, 40 objects acquired in human mode (see definition above) for the training phase. Data for the testing phase have been collected in both human and robot mode. The acquisition size is 640x480, subsequently cropped to the bounding box of the object according to the kinematics or motion cue. The bounding box is 160x160 in human mode and 320x320 in robot mode. For each object we provide 200 training samples. Each category is trained with 3 objects (600 examples per category). The following tests are provided in order to assess performance:

Demonstrator: during the test we change the demonstrator; the objects are known instances.
Categorization: we test new instances of objects.
Robot: we let the robot grasp the objects and recognize their category. For this test we use both known and unknown instances.
Background: we select 10 images per category on which the classifiers achieve 99% accuracy. We provide the segmentation mask of the objects and test whether the classifier recognizes the object or the background.
Archive:  iCubWorld.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  2013-04-10 16:26   iCubWorld/test/
        0  2013-04-10 19:42   iCubWorld/test/background/
        0  2013-04-10 16:26   iCubWorld/test/background/bananas/
    76815  2013-03-29 19:34   iCubWorld/test/background/bananas/00000000.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bananas/00000001.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bananas/00000002.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bananas/00000003.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bananas/00000004.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bananas/00000005.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bananas/00000006.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bananas/00000007.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bananas/00000008.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bananas/00000009.ppm
        0  2013-04-10 16:26   iCubWorld/test/background/bananas/mask/
     4796  2013-03-30 12:35   iCubWorld/test/background/bananas/mask/00000000.png
     5078  2013-03-30 12:40   iCubWorld/test/background/bananas/mask/00000001.png
     4718  2013-03-30 12:41   iCubWorld/test/background/bananas/mask/00000002.png
     4685  2013-03-30 12:43   iCubWorld/test/background/bananas/mask/00000003.png
     4697  2013-03-30 12:44   iCubWorld/test/background/bananas/mask/00000004.png
     5024  2013-03-30 12:46   iCubWorld/test/background/bananas/mask/00000005.png
     4925  2013-03-30 12:46   iCubWorld/test/background/bananas/mask/00000006.png
     4900  2013-03-30 12:52   iCubWorld/test/background/bananas/mask/00000007.png
     4938  2013-03-30 12:53   iCubWorld/test/background/bananas/mask/00000008.png
     4747  2013-03-30 12:45   iCubWorld/test/background/bananas/mask/00000009.png
        0  2013-04-10 16:26   iCubWorld/test/background/bottles/
    76815  2013-03-29 19:34   iCubWorld/test/background/bottles/00000000.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bottles/00000001.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bottles/00000002.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bottles/00000003.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bottles/00000004.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bottles/00000005.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bottles/00000006.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bottles/00000007.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bottles/00000008.ppm
    76815  2013-03-29 19:34   iCubWorld/test/background/bottles/00000009.ppm
        0  2013-04-10 16:26   iCubWorld/test/background/bottles/mask/
     4911  2013-03-30 13:13   iCubWorld/test/background/bottles/mask/00000000.png
     4905  2013-03-30 13:12   iCubWorld/test/background/bottles/mask/00000001.png
     4893  2013-03-30 13:11   iCubWorld/test/background/bottles/mask/00000002.png
     4851  2013-03-30 13:09   iCubWorld/test/background/bottles/mask/00000003.png
     4648  2013-03-30 13:08   iCubWorld/test/background/bottles/mask/00000004.png
....
---------                     -------
1022731718                     11103 files</description>
<size>818642156</size>
</item><item>
<title>SUFR ver1.3 2014 synthetic image datasets</title>
<category>Dataset</category>
<infohash>032b2df1f6f0d75817b0f3af2af9bcdb3a415c37</infohash>
<guid>https://academictorrents.com/details/032b2df1f6f0d75817b0f3af2af9bcdb3a415c37</guid>
<link>https://academictorrents.com/details/032b2df1f6f0d75817b0f3af2af9bcdb3a415c37</link>
<description>![](https://sufr2014.files.wordpress.com/2014/01/cropped-sufr_examples_merge3.png)

##SUFR_ver1.3
Joel Z. Leibo, Qianli Liao, and Tomaso Poggio

##Contents:
1. SUFR-W
2. SUFR

##Description:
This package contains SUFR-W, a dataset of "in the wild" natural images of faces gathered from the internet. The protocol used to create the dataset is described in Leibo, Liao and Poggio (2014). It also contains the full set of SUFR synthetic datasets, called the "Subtasks of Unconstrained Face Recognition Challenge" in Leibo, Liao and Poggio (2014).

##Details:

##SUFR-W

** SUFR_in_the_wild/SUFR_in_the_wild_info.mat
matlab struct "info" contains two fields:
- id: the ID of the person depicted by each image
- name: the name of the person depicted by each image

** SUFR_in_the_wild/SUFR_in_the_wild_info.txt
Contains the same information as SUFR_in_the_wild_info.mat, but in plain text

** SUFR_in_the_wild/splits_10_folds.mat
i. matlab struct "sufr_train_val_test_names" contains three fields: train, val, test. Each field contains a 1x10 cell. The i-th element of a cell contains the names (IDs) of the i-th training/test/validation fold. The "names" run from 1 to 400; they are actually the IDs of the people.
ii. matlab struct "sufr_train_val_test" contains three fields: train, val, test. Each field contains a 1x10 cell. The i-th element of a cell contains the training/test/validation pairs (and labels) of the i-th fold. The first two columns are the image indices of the training/test/validation pairs. The last column is the label: 1 = same person, -1 = different people.

** SUFR_in_the_wild/splits_10_folds_text
a folder containing the text version of SUFR_in_the_wild/splits_10_folds.mat

** Note: similar to the protocol of LFW (Huang et al. 2007), we use 10-fold cross-validation. Training, validation, and test data are provided for each fold. They are not overlapping: individual people appearing in the test set do not appear in the training or validation set. That is, if any image of person X appears in the training set, then no images of person X will appear in the test set.

##SUFR
Each dataset contains the following annotations. Information is provided in two formats: .txt and .mat.

** info.mat
a matlab struct containing:
- sku: 3D model names we used to build the dataset
- id: object ID of each image
- angle: rotation angle of each image
- ilum: illumination info
- shift: translation
- scale: size of the face
- affine: the affine transformation matrix
- background: background ID

** info.txt
text version of info.mat

** bounding_box_info.txt
The bounding box of the face in each image

** splits.mat
- sufr_train_test_sets: training and testing pairs and labels. The first two columns are the image indices of the training/test/validation pairs. The last column is the label: 1 = same person, -1 = different people.
- sufr_train_val_sets: training and validation pairs and labels
- sufr_train_val_test_names: training, validation and testing IDs. The IDs correspond to the "id" field in info.mat

** test.txt, test_names.txt, train.txt, train_names.txt, val.txt, val_names.txt
text versions of splits.mat

** Note: given the large number of synthetic datasets, we do not require 10-fold cross-validation. The model should be developed using only the training and validation sets. The test set should only be used once, when reporting results.

##Version history:
- This is ver1.3 of SUFR-W.
- The following two papers report results on slightly older versions of the dataset. The differences between versions are minor: a few label mistakes were corrected and slightly different training/test splits were used.
1.1: Liao Q, Leibo JZ, Poggio T. Learning invariant representations and applications to face verification (2013). Advances in Neural Information Processing Systems (NIPS). Lake Tahoe, NV.
1.2: Liao Q, Leibo JZ, Mroueh Y, Poggio T. Can a biologically-plausible hierarchy effectively replace face detection, alignment, and recognition pipelines? (2013) arXiv:1311.4082, November 16, 2013.
- There is only one version of the synthetic datasets (this one).
- We do NOT anticipate any further changes to either SUFR-W or SUFR. This version (1.3) is the first to be publicly released, so going forward all reported results will be on version 1.3.

##Reference
Please cite as: Leibo J. Z., Liao Q., and Poggio T. Subtasks of Unconstrained Face Recognition (2014). 9th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP). Lisbon, Portugal.
- Available from: http://cbcl.mit.edu/publications/ps/Leibo_Liao_Poggio_VISAPP_2014.pdf
- Presentation available at: http://cbcl.mit.edu/publications/ps/Subtasks_Presentation_VISAPP2014.pdf
- Bibtex at: http://www.jzleibo.com/bio/subtasks

##Acknowledgment
This material is based upon work supported by the Center for Minds, Brains and Machines (CBMM), funded by NSF STC award CCF-1231216.</description>
<size>5042925615</size>
</item><item>
<title>Documents and Dependencies: an Exploration of Vector Space Models for Semantic Composition</title>
<category>Dataset</category>
<infohash>22e6e99cda7c68a59950c3aee9da845a0637ae85</infohash>
<guid>https://academictorrents.com/details/22e6e99cda7c68a59950c3aee9da845a0637ae85</guid>
<link>https://academictorrents.com/details/22e6e99cda7c68a59950c3aee9da845a0637ae85</link>
<description>This zip should contain 4 files:
- README.txt (this file)
- doc2Dep20MWU57k_1000concat2000.tab
- doc2Dep20MWU57k_1000concat2000.txt
- doc2Dep20MWU57k_1000concat2000.mat

****doc2Dep20MWU57k_1000concat2000.txt****
This file contains the 54975 word-units with POS tags. The order of the words in this file corresponds to the order of the rows in doc2Dep20MWU57k_1000concat2000.tab

****doc2Dep20MWU57k_1000concat2000.tab****
This tab-separated-value file contains the concatenated SVD matrices created as described in "Documents and Dependencies: an Exploration of Vector Space Models for Semantic Composition" (Fyshe 2013). The size of the matrix is 54975x2000. The first 1000 dimensions are Document dimensions; the second 1000 (1001-2000) are Dependency dimensions. The rows appear in the same order as the word-units in doc2Dep20MWU57k_1000concat2000.txt

****doc2Dep20MWU57k_1000concat2000.mat****
For convenience, this is the data contained in doc2Dep20MWU57k_1000concat2000.tab and doc2Dep20MWU57k_1000concat2000.txt saved into two matlab variables: count_matrix is the concatenated SVD matrices (tab file), and words are the words (txt file).

Questions may be directed to Alona Fyshe, afyshe at cs dot cmu dot edu.</description>
<size>1316400676</size>
</item><item>
<title>fMRI Simon task</title>
<category>Dataset</category>
<infohash>dd9880a3c80b9e890c80583008d6e7a73c39cedc</infohash>
<guid>https://academictorrents.com/details/dd9880a3c80b9e890c80583008d6e7a73c39cedc</guid>
<link>https://academictorrents.com/details/dd9880a3c80b9e890c80583008d6e7a73c39cedc</link>
<description>Submitted by picchetti on Thu, 10/06/2011 - 20:54

The "NYU Simon Task" dataset comprises data collected from 21 healthy adults while they performed a rapid event-related Simon task. Please note that all data have been uploaded regardless of quality; it is up to the user to check for data quality (movement etc.).

On each trial (the inter-trial interval (ITI) was 2.5 seconds, with null events for jitter), a red or green box appeared on the right or left side of the screen. Participants used their left index finger to respond to the presentation of a green box, and their right index finger to respond to the presentation of a red box. In congruent trials the green box appeared on the left or the red box on the right, while in more demanding incongruent trials the green box appeared on the right and the red on the left. Subjects performed two blocks, each containing 48 congruent and 48 incongruent trials, presented in a pre-determined order (as per OptSeq), interspersed with 24 null trials (fixation only).

Functional imaging data were acquired using a research-dedicated Siemens Allegra 3.0 T scanner, with a standard Siemens head coil, located at the NYU Center for Brain Imaging.</description>
<size>1624229463</size>
</item><item>
<title>fMRI Visual object recognition</title>
<category>Dataset</category>
<infohash>506d160d9f09dd1612f85179f18ea698c04b1f53</infohash>
<guid>https://academictorrents.com/details/506d160d9f09dd1612f85179f18ea698c04b1f53</guid>
<link>https://academictorrents.com/details/506d160d9f09dd1612f85179f18ea698c04b1f53</link>
<description>Submitted by admin on Wed, 10/12/2011 - 17:36</description>
<size>2107434101</size>
</item><item>
<title>fMRI Word and object processing</title>
<category>Dataset</category>
<infohash>0e7413de7f06a00a36433bfd7c0ecf55d74083b4</infohash>
<guid>https://academictorrents.com/details/0e7413de7f06a00a36433bfd7c0ecf55d74083b4</guid>
<link>https://academictorrents.com/details/0e7413de7f06a00a36433bfd7c0ecf55d74083b4</link>
<description>Submitted by admin on Fri, 11/04/2011 - 16:02

Subjects performed a visual one-back task with four categories of items: written words, objects, scrambled objects and consonant letter strings.

Tasks and Conditions:
001 word one-back task
004 Consonant strings
002 object one-back task
002 Objects
003 Scrambled

Investigator Info
Investigators: Duncan, K. Pattamadilok, C. Knierim, I. Devlin, J.

Publications
Pubmed Link: Consistency and variability in functional localisers

Study Metadata
Sample Size: 49
Scanner Type: Siemens 1.5T
Sharing License: PPDL
Accession Number: ds000107</description>
<size>3673652049</size>
</item><item>
<title>fMRI Living-nonliving decision with plain or mirror-reversed text</title>
<category>Dataset</category>
<infohash>8ace9bab5c6009d3423d0d5d551381a4db0e092b</infohash>
<guid>https://academictorrents.com/details/8ace9bab5c6009d3423d0d5d551381a4db0e092b</guid>
<link>https://academictorrents.com/details/8ace9bab5c6009d3423d0d5d551381a4db0e092b</link>
<description>Living-nonliving decision with plain or mirror-reversed text
Submitted by picchetti on Sun, 12/23/2012 - 19:16

Subjects performed a living-nonliving decision on items presented in either plain or mirror-reversed text. ds000006A represents the first session and ds000006B represents the second session.

Tasks and Conditions:
001 living/nonliving judgment on mirror-reversed and plain-text words
001 Mirror-Switch
002 Mirror-Repeat
003 Plain-Switch
004 Plain-Repeat

Investigator Info
Investigators: K Jimura E Stover F Cazalis R Poldrack

Study Metadata
Sample Size: 14
Scanner Type: Siemens Allegra 3T
Sharing License: PPDL
Accession Number: ds000006</description>
<size>2696095994</size>
</item><item>
<title>fMRI Mixed-gambles task</title>
<category>Dataset</category>
<infohash>dbb05860807d62039b88c2d425797179fdeb12df</infohash>
<guid>https://academictorrents.com/details/dbb05860807d62039b88c2d425797179fdeb12df</guid>
<link>https://academictorrents.com/details/dbb05860807d62039b88c2d425797179fdeb12df</link>
<description>Submitted by picchetti on Thu, 10/06/2011 - 12:38

Subjects were presented with mixed (gain/loss) gambles, and decided whether they would accept each gamble. No outcomes of these gambles were presented during scanning, but after the scan three gambles were selected at random and played for real money.

Tasks and Conditions:
001 mixed gambles task
002 parametric gain
003 parametric loss
004 distance from indifference

Investigator Info
Investigators: Tom S.M. Fox C.R. Trepel C. Poldrack R.A.

Publications
Pubmed Link: The neural basis of loss aversion in decision making under risk

Study Metadata
Sample Size: 16
Scanner Type: 3T Siemens AG (Erlangen, Germany) Allegra MRI scanner
Sharing License: PPDL
Accession Number: ds000005</description>
<size>2013537938</size>
</item><item>
<title>fMRI Rhyme judgment</title>
<category>Dataset</category>
<infohash>dd259c92c9b28fb7df3d041c9e4a97e20eb0a608</infohash>
<guid>https://academictorrents.com/details/dd259c92c9b28fb7df3d041c9e4a97e20eb0a608</guid>
<link>https://academictorrents.com/details/dd259c92c9b28fb7df3d041c9e4a97e20eb0a608</link>
<description>Submitted by picchetti on Thu, 10/06/2011 - 11:48 Subjects were presented with pairs of either words or pseudowords, and made rhyming judgments for each pair. Tasks and Conditions: 001 rhyme verification task 001 Word 002 Pseudoword Investigator Info Investigators: Xue, G. Poldrack, R.A. Study Metadata Sample Size: 13 Scanner Type: TBA Sharing License: PPDL Accession Number: ds000003</description>
<size>454921098</size>
</item><item>
<title>fMRI Classification learning</title>
<category>Dataset</category>
<infohash>7afea3eb89ed37afb45064980b4c532b024ea96e</infohash>
<guid>https://academictorrents.com/details/7afea3eb89ed37afb45064980b4c532b024ea96e</guid>
<link>https://academictorrents.com/details/7afea3eb89ed37afb45064980b4c532b024ea96e</link>
<description>Submitted by picchetti on Thu, 10/06/2011 - 11:36 Subjects performed a classification learning task with two different problems (across different runs), using a "weather prediction" task.  In one (probabilistic) problem, the labels were probabilistically related to each set of cards.  In another (deterministic) problem, the labels were deterministically related to each set of cards.  After learning, subjects participated in an event-related block of judgment only (no feedback) in which they were presented with stimuli from both of the training problems. Tasks and Conditions: 001 Probabilistic classification task 001 Probabilistic classification trials 002 feedback 002 deterministic classification 001 Deterministic classification trials 002 feedback 003 classification probe without feedback 001 Classification trials: Probabilistic 002 Classification trials: Deterministic Investigator Info Investigators: Aron, A.R. Poldrack, R.A. Gluck, M.A. Acknowledgements and Funding: Whitehall Foundation and NSF grant BCS-0223843 to R.A.P. The authors thank Allan J. Tobin and Robert Bilder for helpful discussion and encouragement, Sabrina Tom for scanning and Catherine Myers and Daphna Shohamy for help with task design. Publications Pubmed Link: Long-term test-retest reliability of fMRI Digital Document: Methods.pdf Study Metadata Sample Size: 17 Scanner Type: 3 T Siemens Allegra MRI scanner Sharing License: PPDL Accession Number: ds000002</description>
<size>3343212440</size>
</item><item>
<title>fMRI Balloon Analog Risk-taking Task</title>
<category>Dataset</category>
<infohash>30a8653b160b2c4f8d7e42d2070ebb1438673b2c</infohash>
<guid>https://academictorrents.com/details/30a8653b160b2c4f8d7e42d2070ebb1438673b2c</guid>
<link>https://academictorrents.com/details/30a8653b160b2c4f8d7e42d2070ebb1438673b2c</link>
<description>Submitted by admin on Tue, 07/10/2012 - 16:08 Subjects perform the Balloon Analog Risk-taking Task in an event-related design. Note: The original highres image for sub004 was not available, so the skull-stripped version is included as highres001.nii.gz Tasks and Conditions: 001 Balloon Analogue Risk Task (BART) 001 pumps_fixed 002 pumps_demean 003 pumps_fixed_real_RT 004 cash_fixed 005 cash_demean 006 cash_fixed_real_RT 007 explode_fixed 008 explode_demean 009 control_pumps_fixed 010 control_pumps_demean 011 control_pumps_fixed_real_RT Investigator Info Investigators: Tom Schonberg Christopher Trepel Craig Fox Russell A. Poldrack Acknowledgements and Funding: This work was supported by NSF DMI-0433693 (R. Poldrack and C. Fox, principal investigators, PIs). We would like to thank Elena Stover for assistance with data collection and for helpful comments on an earlier version of this manuscript. Publications Pubmed Link: Decreasing ventromedial prefrontal cortex activity during sequential risk-taking: an FMRI investigation of the balloon analog risk task. Digital Document: fnins-06-00080.pdf Study Metadata Sample Size: 16 Scanner Type: Siemens Allegra 3T Sharing License: PPDL Accession Number: ds000001</description>
<size>2526804721</size>
</item><item>
<title>Brushstroke Data of van Gogh Paintings for Research</title>
<category>Dataset</category>
<infohash>55a8925a8d546b9ca47d309ab438b91f7959e77f</infohash>
<guid>https://academictorrents.com/details/55a8925a8d546b9ca47d309ab438b91f7959e77f</guid>
<link>https://academictorrents.com/details/55a8925a8d546b9ca47d309ab438b91f7959e77f</link>
<description>The dataset was used in our painting analysis work. The data is provided for academic research comparison only. You should not redistribute the data. Related Paper: Jia Li, Lei Yao, Ella Hendriks and James Z. Wang, "Rhythmic Brushstrokes Distinguish van Gogh from His Contemporaries: Findings via Automated Brushstroke Extraction," IEEE Transactions on Pattern Analysis and Machine Intelligence, DOI: 10.1109/TPAMI.2011.203, online 2011, print version to appear in 2012.</description>
<size>47622171</size>
</item><item>
<title>Object and Concept Recognition for Content-Based Image Retrieval (CBIR)</title>
<category>Dataset</category>
<infohash>d5d80c1ad9d6b44b6e80c942414f1753bf9a1970</infohash>
<guid>https://academictorrents.com/details/d5d80c1ad9d6b44b6e80c942414f1753bf9a1970</guid>
<link>https://academictorrents.com/details/d5d80c1ad9d6b44b6e80c942414f1753bf9a1970</link>
<description>Our groundtruth database consists of 21 datasets of outdoor scene images, many including a text file containing a list of visible objects for each image. Project Summary With the advent of powerful but inexpensive computers and storage devices and with the availability of the World Wide Web, image databases have moved from research to reality. Search engines for finding images are available from commercial concerns and from research institutes. These search engines can retrieve images by keywords or by image content such as color, texture, and simple shape properties. Content-based image retrieval is not yet a commercial success, because most real users searching for images want to specify the semantic class of the scene or the object(s) it should contain. The large commercial image providers are still using human indexers to select keywords for their images, even though their databases contain thousands or, in some cases, millions of images. Automatic object recognition is needed, but most successful computer vision object recognition systems can only handle particular objects, such as industrial parts, that can be represented by precise geometric models. Content-based retrieval requires the recognition of generic classes of objects and concepts. A limited amount of work has been done in this respect, but no general methodology has yet emerged. The goal of this research is to develop the necessary methodology for automated recognition of generic object and concept classes in digital images. The work will build on existing object-recognition techniques in computer vision for low-level feature extraction and will design higher-level relationship and cluster features and a new unified recognition methodology to handle the difficult problem of recognizing classes of objects, instead of particular instances. Local feature representations and global summaries that can be used by general-purpose classifiers will be developed. 
A powerful new hierarchical multiple classifier methodology will provide the learning mechanism for automating the development of recognizers for additional objects and concepts. The resulting techniques will be evaluated on several different large image databases, including commercial databases whose images are grouped into broad classes and a ground-truth database that provides a list of the objects in each image. The results of this work will be a new generic object recognition paradigm that can immediately be applied to automated or semi-automated indexing of large image databases and will be a step forward in object recognition. Project Impact The results of this project will have an impact on both image retrieval from large databases and object recognition in general. It will target the recognition of classes of common objects that can appear in image databases of outdoor scenes. It will develop object class recognizers and a new learning formalism for automating the production of new classifiers for new classes of objects. It will also develop new representations for the image features that can be used to recognize these objects. It will allow content-based retrieval to become an important method for accessing real, commercial image databases, which today use only human index terms for retrieval. Goals, Objectives and Targeted Activities In the first year of the grant, we developed the feature extraction routines to extract features capable of recognizing an initial set of common objects representing a variety of the types of objects that appear in outdoor scenes, including city scenes and noncity scenes. We designed generic object recognition algorithms for the initial object set. We have developed such algorithms for vehicles, boats, and buildings, and have designed new high-level image features including symmetry features and cluster features. In the second year, we designed a unified representation for the image features called abstract regions. 
These are regions of the image that can come about from many different processes: color clustering, texture clustering, line-segment clustering, symmetry detection, and so on. All abstract regions will have a common set of features, while each different category will have its own special features. Our current emphasis is on using abstract features along with learning methodologies to recognize common objects. Area Background The area of content-based image retrieval is a hybrid research area that requires knowledge of both computer vision and database systems. Large image databases are being collected, and images from these collections made available to users in advertising, marketing, entertainment, and other areas where images can be used to enhance the product. These images are generally organized loosely by category, such as animals, natural scenes, people, and so on. All image indexing is done by human indexers who list the important objects in an image and other terms by which users may wish to access it. This method is not suitable for today's very large image databases. Content-based retrieval systems utilize measures that are based on low-level attributes of the image itself, including color histograms, color composition, and texture. State-of-the-art research focuses on more powerful measures that can find regions of an image corresponding to known objects that users wish to retrieve. There has been some success in finding human faces of different selected sizes, human bodies, horses, zebras and other textured animals with known patterns, and such backgrounds as jungles, water, and sky. Our research will focus on a unified methodology for feature representation and object class recognition. This work will lead to automatic indexing capabilities in the future.</description>
<size>387885021</size>
</item><item>
<title>Mnih Massachusetts Roads Dataset</title>
<category>Dataset</category>
<infohash>3b17f08ed5027ea24db04f460b7894d913f86c21</infohash>
<guid>https://academictorrents.com/details/3b17f08ed5027ea24db04f460b7894d913f86c21</guid>
<link>https://academictorrents.com/details/3b17f08ed5027ea24db04f460b7894d913f86c21</link>
<description>"The datasets introduced in Chapter 6 of my PhD thesis are below. See the thesis for more details."</description>
<size>10619540481</size>
</item><item>
<title>Mnih Massachusetts Building Dataset</title>
<category>Dataset</category>
<infohash>630d2c7e265af1d957cbee270f4328c54ccef333</infohash>
<guid>https://academictorrents.com/details/630d2c7e265af1d957cbee270f4328c54ccef333</guid>
<link>https://academictorrents.com/details/630d2c7e265af1d957cbee270f4328c54ccef333</link>
<description>"The datasets introduced in Chapter 6 of my PhD thesis are below. See the thesis for more details."</description>
<size>2074077884</size>
</item><item>
<title>New York City Taxi Trip Data 2013</title>
<category>Dataset</category>
<infohash>6c594866904494b06aae51ad97ec7f985059b135</infohash>
<guid>https://academictorrents.com/details/6c594866904494b06aae51ad97ec7f985059b135</guid>
<link>https://academictorrents.com/details/6c594866904494b06aae51ad97ec7f985059b135</link>
<description>There are two folders of data, Faredata_2013 and Tripdata_2013. Each folder contains chunks of data in csv format, ranging from ~1.5 to ~2.5 GB in size. Fare data includes medallion, hack_license, vendor_id, pickup date/time, payment type, fare, tip amount (look at all those zeros!), tolls, and total. Trip data (the good stuff!) has about 14 million rows per file, and each row contains medallion, hack license, vendor id, rate code, store and forward flag, pickup date/time, dropoff date/time, passenger count, trip time in seconds, trip distance, and latitude/longitude coordinates for the pickup and dropoff locations. The possibilities are endless! I smell a tip analysis coming on!</description>
<size>11849073510</size>
</item><item>
<title>New York City Taxi Fare Data 2013</title>
<category>Dataset</category>
<infohash>107a7d997f331ef4820cf5f7f654516e1704dccf</infohash>
<guid>https://academictorrents.com/details/107a7d997f331ef4820cf5f7f654516e1704dccf</guid>
<link>https://academictorrents.com/details/107a7d997f331ef4820cf5f7f654516e1704dccf</link>
<description>There are two folders of data, Faredata_2013 and Tripdata_2013. Each folder contains chunks of data in csv format, ranging from ~1.5 to ~2.5 GB in size. Fare data includes medallion, hack_license, vendor_id, pickup date/time, payment type, fare, tip amount (look at all those zeros!), tolls, and total. Trip data (the good stuff!) has about 14 million rows per file, and each row contains medallion, hack license, vendor id, rate code, store and forward flag, pickup date/time, dropoff date/time, passenger count, trip time in seconds, trip distance, and latitude/longitude coordinates for the pickup and dropoff locations. The possibilities are endless! I smell a tip analysis coming on!</description>
<size>8303600887</size>
</item><item>
<title>A high resolution 7-Tesla resting-state fMRI test-retest dataset with cognitive and physiological measures</title>
<category>Dataset</category>
<infohash>5fc2f273123336ee34b9ea635ef8440377a42888</infohash>
<guid>https://academictorrents.com/details/5fc2f273123336ee34b9ea635ef8440377a42888</guid>
<link>https://academictorrents.com/details/5fc2f273123336ee34b9ea635ef8440377a42888</link>
<description>Here we present a test-retest dataset of functional magnetic resonance imaging (fMRI) data acquired at rest. 22 participants were scanned during two sessions spaced one week apart. Each session includes two 1.5 mm isotropic whole-brain scans and one 0.75 mm isotropic scan of the prefrontal cortex, giving a total of six timepoints. Additionally, the dataset includes measures of mood, sustained attention, blood pressure, respiration, pulse, and the content of self-generated thoughts (mind wandering). This data enables the investigation of sources of both intra- and inter-session variability, not limited to physiological changes but also including alterations in cognitive and affective states, at high spatial resolution. The dataset is accompanied by a detailed experimental protocol and the source code of all stimuli used. Please subscribe to http://groups.google.com/group/7t_trt for important announcements. More information at http://dx.doi.org/10.1101/008706.</description>
<size>93511442267</size>
</item><item>
<title>CONSENT! A study of consent violations in the Dutch BDSM scene</title>
<category>Paper</category>
<infohash>c149e56c143576cb99965489b41bafc2e51933ed</infohash>
<guid>https://academictorrents.com/details/c149e56c143576cb99965489b41bafc2e51933ed</guid>
<link>https://academictorrents.com/details/c149e56c143576cb99965489b41bafc2e51933ed</link>
<description>In the Dutch scene, pre-negotiated limits and safewords are ignored on a regular basis. Likewise, many kinksters have experienced scenes that, with hindsight, went too far. This is not always considered bad, and it's certainly not always experienced as abuse. Consent is a less absolute given than usually assumed. Consent is the norm, but not always actual practice. A substantial part of the consent violations happens at parties. The idea that parties are safe places for a first scene should be revised at least a little. Kinksters often doubt consent in scenes by other people. Some of those who doubt take action, some don't; those who don't often make that choice only after discussing the situation with others or a DM. There is no evidence for a massive bystander effect. Only a small minority has ever felt the need to use a party safeword. However, this is not the case for all victims of consent violations at parties. Although a party safeword could contribute to preventing consent violations, it is certainly no cure-all. KINKYMINDS</description>
<size>415795</size>
</item><item>
<title>Internet Census 2012</title>
<category>Dataset</category>
<infohash>7e138693170629fa7835d52798be18ab2fb847fe</infohash>
<guid>https://academictorrents.com/details/7e138693170629fa7835d52798be18ab2fb847fe</guid>
<link>https://academictorrents.com/details/7e138693170629fa7835d52798be18ab2fb847fe</link>
<description>All data collected during the Internet Census 2012 is available for download via BitTorrent. It is released into the public domain so everybody can use it for any purpose. For an explanation of what this data is and how it was obtained, see the paper. The full download is 568 GB. The data is segmented and organized into folders and subfolders, so you may choose just the files you need and don't have to download everything. The data is tab-separated, ordered by IP and timestamp. The torrent also contains an offline version of this website and tab-separated lists of the data which can be browsed in the service probe overview section, in the Hilbert Browser and in the reverse DNS overview. The data is compressed using ZPAQ 1.10, the version packaged by default in Debian and Ubuntu. It was found to have the smallest file size, although this comes at the cost of very high CPU usage. Python code to distribute the decompression workload across LAN computers is part of the code pack. Decompressing all data results in 9 TB of raw logfiles, but this code can also be used to recompress the data into gzip files. The gzipped dataset should be ~1.5 TB.</description>
<size>611420520537</size>
</item><item>
<title>Lerman Digg 2009 Dataset</title>
<category>Dataset</category>
<infohash>d98540da6d34fb6a0150fd88b41580a377cb454d</infohash>
<guid>https://academictorrents.com/details/d98540da6d34fb6a0150fd88b41580a377cb454d</guid>
<link>https://academictorrents.com/details/d98540da6d34fb6a0150fd88b41580a377cb454d</link>
<description>Digg2009 data set contains data about stories promoted to Digg's front page over a period of a month in 2009. For each story, we collected the list of all Digg users who had voted for the story up to the time of data collection, and the time stamp of each vote. We also retrieved the voters' friendship links. The semantics of the friendship links are as follows: user_id &amp;mdash;&gt; friend_id means that user_id is watching the activities of (is a fan of) friend_id. User ids have been anonymized, but are unique in the data set: a user with a specific id in the friendship links table and a user with the same id in the votes table correspond to the same actual user. The data is in zipped csv files that are password protected. The password is digg2009_user. ##Votes Table digg_votes contains 3,018,197 votes on 3553 popular stories made by 139,409 distinct users. The first vote is from the story's submitter. ##Schema of the table |Attribute|Value| |-|-| |vote_date: |Unix time stamp of the vote| |voter_id: |anonymized unique id of the voter| |story_id: |anonymized unique id of the story| ![](http://www.isi.edu/~lerman/downloads/diggs_distribution.jpg) ![](http://www.isi.edu/~lerman/downloads/voting_distribution.png) (left) Distribution of votes (diggs) per story. An outlier with more than 24,000 votes is not shown. (right) Distribution of the number of votes (diggs) made by users. ##Friendship links Table digg_friends contains 1,731,658 friendship links of 71,367 distinct users. Voters who do not appear in the table did not specify any friends at the time the data was collected. 
##Schema of the digg_friends table |Attribute|Value| |-|-| |mutual: |indicates whether the link represents a mutual friend relation (1) or not (0)| |friend_date: |Unix time stamp of when the friendship link was created| |user_id: |anonymized unique id of a user| |friend_id: |anonymized unique id of a user| ![](http://www.isi.edu/~lerman/downloads/fans_distribution.png) Distribution of the number of fans per user. Empirical characterization of this data is described in Lerman, K., and Ghosh, R. (2010) "Information Contagion: an Empirical Study of Spread of News on Digg and Twitter Social Networks." In Proceedings of 4th International Conference on Weblogs and Social Media (ICWSM). This data is made available to the community for research purposes only. If you use the data in a publication, please cite the above paper.</description>
<size>37554853</size>
</item><item>
<title>Lerman Twitter 2010 Dataset</title>
<category>Dataset</category>
<infohash>d8b3a315172c8d804528762f37fa67db14577cdb</infohash>
<guid>https://academictorrents.com/details/d8b3a315172c8d804528762f37fa67db14577cdb</guid>
<link>https://academictorrents.com/details/d8b3a315172c8d804528762f37fa67db14577cdb</link>
<description>Twitter_2010 data set contains tweets containing URLs that were posted on Twitter during October 2010. In addition to the tweets, we also collected the followee links of tweeting users, allowing us to reconstruct the follower graph of active (tweeting) users. URLs: 66,059; tweets: 2,859,764; users: 736,930; links: 36,743,448. Tweets Table (in csv format) link_status_search_with_ordering_real_csv contains tweets with the following information: link: URL within the text of the tweet; id: tweet id; create_at: date added to the db; create_at_long; inreplyto_screen_name: screen name of the user this tweet is replying to; inreplyto_user_id: user id of the user this tweet is replying to; source: device from which the tweet originated; bad_user_id: alternate user id; user_screen_name: tweeting user screen name; order_of_users: tweet's index within the sequence of tweets of the same URL; user_id: user id. Table (in csv format) distinct_users_from_search_table_real_map contains names of tweeting users, and the following information for each user: user_id: user id; user_screen_name: user name; indegree: number of followers; outdegree: number of friends/followees; bad_user_id: alternate user id. Follower graph File active_follower_real_sql contains a zipped SQL dump of links between tweeting users in the form: user_id: user id; follower_id: user id of the follower. Empirical characterization of this data is described in Kristina Lerman, Rumi Ghosh, Tawan Surachawala (2012) "Social Contagion: An Empirical Study of Information Spread on Digg and Twitter Follower Graphs." This data is made available to the community for research purposes only. If you use the data in a publication, please cite the above paper.</description>
<size>292173969</size>
</item><item>
<title>Next-generation sequencing course videos</title>
<category>Course</category>
<infohash>d649d0f57856bf3da174b33d71f08d217b110830</infohash>
<guid>https://academictorrents.com/details/d649d0f57856bf3da174b33d71f08d217b110830</guid>
<link>https://academictorrents.com/details/d649d0f57856bf3da174b33d71f08d217b110830</link>
<description>These are recordings of the Next-generation sequencing course, which took place at the Faculty of Bioengineering and Bioinformatics, Lomonosov Moscow State University, Moscow, in 2014.</description>
<size>7305908643</size>
</item><item>
<title>US domestic flights from 1990 to 2009</title>
<category>Dataset</category>
<infohash>a2ccf94bbb4af222bf8e69dad60a68a29f310d9a</infohash>
<guid>https://academictorrents.com/details/a2ccf94bbb4af222bf8e69dad60a68a29f310d9a</guid>
<link>https://academictorrents.com/details/a2ccf94bbb4af222bf8e69dad60a68a29f310d9a</link>
<description>Over 3.5 million monthly domestic flight records from 1990 to 2009. Data are arranged as an adjacency list with metadata. Ready for immediate database import and analysis. ##Fields: |Short name|Type|Description| |-|-|-| |Origin|String|Three letter airport code of the origin airport| |Destination|String|Three letter airport code of the destination airport| |Origin City|String|Origin city name| |Destination City|String|Destination city name| |Passengers|Integer|Number of passengers transported from origin to destination| |Seats|Integer|Number of seats available on flights from origin to destination| |Flights|Integer|Number of flights between origin and destination (multiple records for one month, many with flights &gt; 1)| |Distance|Integer|Distance (to nearest mile) flown between origin and destination| |Fly Date|Integer|The date (yyyymm) of the flight| |Origin Population|Integer|Origin city's population as reported by US Census| |Destination Population|Integer|Destination city's population as reported by US Census| ##Snippet: MFR	RDM	Medford, OR	Bend, OR	0	0	1	156	200810	200298	157730 AMA	EKO	Amarillo, TX	Elko, NV	124	124	1	858	199308	202960	40259 TUS	EKO	Tucson, AZ	Elko, NV	112	124	1	658	199308	711392	40259 AMA	EKO	Amarillo, TX	Elko, NV	115	124	1	858	199406	206315	41668 ICT	EKO	Wichita, KS	Elko, NV	100	124	1	1007	199607	552884	45034 SPS	EKO	Wichita Falls, TX	Elko, NV	122	124	1	1059	199603	147683	45034 ##Source(s) 1. US Census Bureau 2. RITA/Transtats, Bureau of Transportation Statistics</description>
<size>35404022</size>
</item><item>
<title>CrackStation's Password Cracking Dictionary (Human Passwords Only)</title>
<category>Dataset</category>
<infohash>7ae809ccd7f0778328ab4b357e777040248b8c7f</infohash>
<guid>https://academictorrents.com/details/7ae809ccd7f0778328ab4b357e777040248b8c7f</guid>
<link>https://academictorrents.com/details/7ae809ccd7f0778328ab4b357e777040248b8c7f</link>
<description>The list contains every wordlist, dictionary, and password database leak that I could find on the internet (and I spent a LOT of time looking). It also contains every word in the Wikipedia databases (pages-articles, retrieved 2010, all languages) as well as lots of books from Project Gutenberg. It also includes the passwords from some low-profile database breaches that were being sold in the underground years ago. The format of the list is a standard text file sorted in non-case-sensitive alphabetical order. Lines are separated with a newline "\n" character. You can test the list without downloading it by giving SHA256 hashes to the free hash cracker or to @PlzCrack on Twitter. Here's a tool for computing hashes easily. Here are the results of cracking LinkedIn's and eHarmony's password hash leaks with the list. The list is responsible for cracking about 30% of all hashes given to CrackStation's free hash cracker, but that figure should be taken with a grain of salt because some people try hashes of really weak passwords just to test the service, and others try to crack their hashes with other online hash crackers before finding CrackStation. Using the list, we were able to crack 49.98% of one customer's set of 373,000 human password hashes to motivate their move to a better salting scheme.</description>
<size>257973006</size>
</item><item>
<title>Wikipedia English Official Offline Edition 2014-07-07</title>
<category>Dataset</category>
<infohash>e18b8cce7d9cb2726f5f40dcb857111ec573cad4</infohash>
<guid>https://academictorrents.com/details/e18b8cce7d9cb2726f5f40dcb857111ec573cad4</guid>
<link>https://academictorrents.com/details/e18b8cce7d9cb2726f5f40dcb857111ec573cad4</link>
<description>Wikipedia offers free copies of all available content to interested users. These databases can be used for mirroring, personal use, informal backups, offline use or database queries (such as for Wikipedia:Maintenance). All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL). Images and other files are available under different terms, as detailed on their description pages. For our advice about complying with these licenses, see Wikipedia:Copyrights.</description>
<size>11031162019</size>
</item><item>
<title>Invent Your Own Computer Games with Python, 2nd Edition</title>
<category>Paper</category>
<infohash>646222e4236e52b96a18fe7ec068389d5d75c8fd</infohash>
<guid>https://academictorrents.com/details/646222e4236e52b96a18fe7ec068389d5d75c8fd</guid>
<link>https://academictorrents.com/details/646222e4236e52b96a18fe7ec068389d5d75c8fd</link>
<description>Invent Your Own Computer Games with Python is a free book (as in, open source) and a free eBook (as in, no cost to download) that teaches you how to program in the Python programming language. Each chapter gives you the complete source code for a new game, and then teaches the programming concepts from the example. This second edition has revised and expanded content, including a Pygame tutorial to make games with graphics, animation, and sound.</description>
<size>5070335</size>
</item><item>
<title>Greening IT : how a greening IT can form a solid base for a low-carbon society</title>
<category>Paper</category>
<infohash>a5a155abf1b83362c117af83ee79d71c5e1c5ba7</infohash>
<guid>https://academictorrents.com/details/a5a155abf1b83362c117af83ee79d71c5e1c5ba7</guid>
<link>https://academictorrents.com/details/a5a155abf1b83362c117af83ee79d71c5e1c5ba7</link>
<description>Information Technology is responsible for approximately 2% of the world's emissions of greenhouse gases. The IT sector itself contributes to these greenhouse gas emissions through its massive consumption of energy - and therefore continuously exacerbates the problem. At the same time, however, the IT industry can provide the technological solutions we need to optimise resource use, save energy and reduce greenhouse gas emissions. We call this Greening IT. This book looks into the great potential of greening society with IT - i.e. the potential of IT in transforming our societies into Low-Carbon societies. The book is the result of an internationally collaborative effort by a number of opinion leaders in the field of Greening IT.</description>
<size>1538645</size>
</item><item>
<title>Ikhana: Unmanned Aircraft System, Western States Fire Missions</title>
<category>Paper</category>
<infohash>c1a08318614481f1abd3edfef3df15a853565f1b</infohash>
<guid>https://academictorrents.com/details/c1a08318614481f1abd3edfef3df15a853565f1b</guid>
<link>https://academictorrents.com/details/c1a08318614481f1abd3edfef3df15a853565f1b</link>
<description>The story of the Ikhana, a remotely piloted vehicle used by NASA researchers to conduct Earth science research and which became an unexpected flying and imaging helper to emergency workers battling California wildfires.</description>
<size>5322549</size>
</item><item>
<title>Modeling Flight</title>
<category>Paper</category>
<infohash>3db14fe4709e69b49826c6b5ae43a8f0416a4b40</infohash>
<guid>https://academictorrents.com/details/3db14fe4709e69b49826c6b5ae43a8f0416a4b40</guid>
<link>https://academictorrents.com/details/3db14fe4709e69b49826c6b5ae43a8f0416a4b40</link>
<description>For years, NASA has used subscale models of aircraft to test how they would perform at full size. In fact, since the 1920s, during the days of the National Advisory Committee for Aeronautics, scientists have continually refined testing techniques, including building and using new facilities, making models more sophisticated and learning how best to interpret the results. Using these techniques, NASA has made many contributions to a broad range of aircraft including general aviation, fighters, civil transports, lifting bodies, reentry capsules, parawing vehicles, and supersonic transports. This book describes the issues that must be considered when transferring subscale results to full-scale application, and reviews results obtained in historically significant aircraft programs conducted at NASA's Langley Research Center, NASA's Dryden Flight Research Center, and NASA's Ames Research Center.</description>
<size>6334851</size>
</item><item>
<title>Breaking the Mishap Chain: Human Factors Lessons Learned from Aerospace Accidents and Incidents in Research, Flight Test, and Development</title>
<category>Paper</category>
<infohash>1c440ef936fcae142f1f80b3f519723dcd8808a0</infohash>
<guid>https://academictorrents.com/details/1c440ef936fcae142f1f80b3f519723dcd8808a0</guid>
<link>https://academictorrents.com/details/1c440ef936fcae142f1f80b3f519723dcd8808a0</link>
<description>This volume contains a collection of case studies of mishaps involving experimental aircraft, aerospace vehicles, and spacecraft in which human factors played a significant role. In all cases the engineers involved, the leaders and managers, and the operators (i.e., pilots and astronauts) were supremely qualified and by all accounts superior performers. Such accidents and incidents rarely resulted from a single cause but were the outcome of a chain of events in which altering at least one element might have prevented disaster. As such, this work is most certainly not an anthology of blame. It is offered as a learning tool so that future organizations, programs, and projects may not be destined to repeat the mistakes of the past. These lessons were learned at high material and personal costs and should not be lost to the pages of history.</description>
<size>22628045</size>
</item><item>
<title>NASA's Contributions to Aeronautics, Volume 1</title>
<category>Paper</category>
<infohash>cb81120d7fdaf03e1544276352d938039e83eb1c</infohash>
<guid>https://academictorrents.com/details/cb81120d7fdaf03e1544276352d938039e83eb1c</guid>
<link>https://academictorrents.com/details/cb81120d7fdaf03e1544276352d938039e83eb1c</link>
<description>Since its creation, NASA has steadily advanced flight within the atmosphere, repeatedly influencing aviation's evolution by extending the rich legacy of its predecessor, the National Advisory Committee for Aeronautics, or NACA. This first volume in a two-volume set includes case studies and essays on NACA-NASA research for contributions such as high-speed wing design, the area rule, rotary-wing aerodynamics research, sonic boom mitigation, hypersonic design, computational fluid dynamics, electronic flight control and environmentally friendly aircraft technology.</description>
<size>10557978</size>
</item><item>
<title>NASA's Contributions to Aeronautics, Volume 2</title>
<category>Paper</category>
<infohash>2bc814f15c61385a3aad423f54ec2c130a2f0b95</infohash>
<guid>https://academictorrents.com/details/2bc814f15c61385a3aad423f54ec2c130a2f0b95</guid>
<link>https://academictorrents.com/details/2bc814f15c61385a3aad423f54ec2c130a2f0b95</link>
<description>The second volume includes case studies and essays on NACA-NASA research for contributions including wind shear and lightning research, flight operations, human factors, wind tunnels, composite structures, general aviation aircraft safety, supersonic cruise aircraft research and atmospheric icing.</description>
<size>12445490</size>
</item><item>
<title>X-15: Extending the Frontiers of Flight</title>
<category>Paper</category>
<infohash>5c2f45f0ca9c2a8140828e59901f8a02466d07ec</infohash>
<guid>https://academictorrents.com/details/5c2f45f0ca9c2a8140828e59901f8a02466d07ec</guid>
<link>https://academictorrents.com/details/5c2f45f0ca9c2a8140828e59901f8a02466d07ec</link>
<description>The X-15 was the ultimate X vehicle. Built in the 1950s, it became the fastest and highest-flying winged aircraft of its time. During 199 flights from 1959 through 1968, it collected data about hypersonic flight that proved invaluable to aeronautics and to the developers of the Space Shuttle. This e-book describes the genesis of the program, the design and construction of the aircraft, the years of research flights and the experiments that flew aboard them.</description>
<size>29475571</size>
</item><item>
<title>The Apollo of Aeronautics: NASA's Aircraft Energy Efficiency Program</title>
<category>Paper</category>
<infohash>eb87997b2e7c4eacc15a8e36823a6c63eaec749e</infohash>
<guid>https://academictorrents.com/details/eb87997b2e7c4eacc15a8e36823a6c63eaec749e</guid>
<link>https://academictorrents.com/details/eb87997b2e7c4eacc15a8e36823a6c63eaec749e</link>
<description>The fuel crisis of the 1970s threatened not only the airline industry but the future of American prosperity itself. It also served as the genesis of technological ingenuity and innovation from a group of scientists and engineers at NASA, who initiated planning exercises to explore new fuel-saving technologies. What emerged was a series of technologically daring aeronautical programs with the potential to reduce by an astonishing 50 percent the amount of fuel used by the nation's commercial and military aircraft.</description>
<size>4848915</size>
</item><item>
<title>Coming Home: Reentry and Recovery from Space</title>
<category>Paper</category>
<infohash>6c1c31544303cff81ac731f0e763dae2dcf5b302</infohash>
<guid>https://academictorrents.com/details/6c1c31544303cff81ac731f0e763dae2dcf5b302</guid>
<link>https://academictorrents.com/details/6c1c31544303cff81ac731f0e763dae2dcf5b302</link>
<description>The technologies for the reentry and recovery from space might change over time, but the challenge remains one of the most important and vexing in the rigorous efforts to bring spacecraft and their crews and cargo home successfully. Returning to Earth after a flight into space is a fundamental challenge, and contributions from the NASA Aeronautics Research Mission Directorate in aerodynamics, thermal protection, guidance and control, stability, propulsion, and landing systems have proven critical to the success of the human space flight and other space programs. Without this base of fundamental and applied research, the capability to fly into space would not exist. This book relates in a chronological manner the way in which NASA has approached the challenge of reentering the atmosphere after a space mission and the technologies associated with safely dealing with the friction of this encounter and the methods used for landing safely on Earth. This history seeks to tell this complex story in a compelling, sophisticated, and technically sound manner for an audience that understands little about the evolution of flight technology. Bits and pieces of this history exist in other publications, but often overlooked is the critical role these concepts played in making a safe return to Earth possible. Moreover, the challenges, mysteries, and outcomes that these programs’ members wrestled with offer object lessons in how earlier generations of engineers sought optimal solutions and made tradeoffs.</description>
<size>7291808</size>
</item><item>
<title>Dressing for Altitude, U.S. Aviation Pressure Suits-Wiley Post to Space Shuttle</title>
<category>Paper</category>
<infohash>8b0a7f2e032d2f2ce186bb3a9c4e958c1bbbb9cf</infohash>
<guid>https://academictorrents.com/details/8b0a7f2e032d2f2ce186bb3a9c4e958c1bbbb9cf</guid>
<link>https://academictorrents.com/details/8b0a7f2e032d2f2ce186bb3a9c4e958c1bbbb9cf</link>
<description>Anybody who has watched many movies or television shows has seen them: the ubiquitous silver suits worn by pilots as they explore the unknown. They are called pressure suits, and one can trace their lineage to Wiley Post or, perhaps, a bit earlier. There are two kinds of pressure suits: partial pressure and full pressure. David Clark, the man, once pointed out that these were not very good names, but they are the ones that stuck. In a partial-pressure suit, the counter-pressure is not as complete as in a full-pressure suit, but it is placed so that shifts in body fluids are kept within reasonable limits. A full-pressure suit, on the other hand, is an anthropomorphic pressure vessel that creates an artificial environment for the pilot. One type of pressure suit is not necessarily better than the other, and both partial-pressure and full-pressure suits are still in limited use around the world. Both types of suits have benefits and limitations and, by and large, pilots dislike both, even while acknowledging their necessity. For the past 60 years, they have been an indispensable part of a small fragment of the aviation world. Although space suits, which differ from pressure suits in subtle but important ways, have been well covered in the literature, pressure suits have gone unheralded except as introductions to the space suit histories. This e-book is an attempt to correct that, and covers pressure suits from the beginning through the end of the Space Shuttle Program.</description>
<size>18709888</size>
</item><item>
<title>Crash Course: Lessons Learned from Accidents Involving Remotely Piloted and Autonomous Aircraft</title>
<category>Paper</category>
<infohash>78f5a524db5e41146963f0da6594d44cb8e99a00</infohash>
<guid>https://academictorrents.com/details/78f5a524db5e41146963f0da6594d44cb8e99a00</guid>
<link>https://academictorrents.com/details/78f5a524db5e41146963f0da6594d44cb8e99a00</link>
<description>This volume investigates remotely piloted research vehicle (RPRV) and unmanned aircraft system (UAS) mishaps, examining their causes, consequences, resultant corrective actions, and lessons learned. Undesired outcomes rarely occur because of a single event; they usually result from a series of events and actions involving equipment malfunctions and/or human factors. This book comprises a series of case studies focusing mostly on accidents and incidents involving experimental aircraft. The information provided should be of use to flight-test organizations, aircraft operators, educators, and students, among others. These lessons are not unique to the UAS environment and are also applicable to crewed aviation and space flight activities. Common elements include crew resource management, training, mission-planning issues, management and programmatic pressures (e.g., schedule, budget, resources), cockpit/control-station design, and other factors.</description>
<size>2838778</size>
</item><item>
<title>Quieting the Boom: The Shaped Sonic Boom Demonstrator and the Quest for Quiet Supersonic Flight</title>
<category>Paper</category>
<infohash>68bdd29343e8f2a157be35d78a87c9249ae218ef</infohash>
<guid>https://academictorrents.com/details/68bdd29343e8f2a157be35d78a87c9249ae218ef</guid>
<link>https://academictorrents.com/details/68bdd29343e8f2a157be35d78a87c9249ae218ef</link>
<description>On a hot and humid July day in 2003, a pair of small supersonic jet airplanes took off together from Cecil Field, a former naval air station on the eastern edge of Jacksonville, FL. Even though the Northrop Corporation had built both planes based on a common design, it was hard at first glance to tell that the two aircraft flying side by side were so closely related. One was a sleek T-38 Talon, a two-seat aircraft that has served as the U.S. Air Force’s (USAF’s) advanced trainer since the early 1960s. The other was originally an F-5E Tiger II, one of more than 2,000 Northrop F-5s that had equipped air forces around the world with a low-cost, high-performance combat and reconnaissance aircraft. Because of the F-5E’s agility and compact size, the U.S. military adopted it as an aggressor aircraft to hone the skills of its own fighter pilots. Both planes attested to the competence of Northrop’s design teams. Of all of the many supersonic jets developed for the Air Force and U.S. Navy in the 1950s, the T-38 and F-5 are the only ones still in general use. Although on loan from the Navy’s aggressor training squadron, this particular F-5E no longer looked much like a fighter jet. With what appeared to be a pouch hanging under its chin, the aircraft somewhat resembled an overgrown pelican. In addition to lettering identifying Northrop Grumman Integrated Systems, its white fuselage was decorated with sharply angled blue and red pinstripes along with emblems containing the acronyms “NASA” and “DARPA” while its tail bore an oval logo with the letters QSP. At each of these stops, the planes attracted the attention of flight-line personnel and others nearby, most of whom could recognize the strange white jet as some kind of F-5. But many of them still had questions. What’s with the big nose? Why is Boeing helping a Northrop Grumman pilot fly across the country? What do those jagged red and blue stripes signify? And why all the various logos? 
Quieting the Boom: The Shaped Sonic Boom Demonstrator and the Quest for Quiet Supersonic Flight is the story of this plane, as well as a general history of sonic boom research, emphasizing the people and organizations. The Shaped Sonic Boom Demonstrator culminated four decades of study and research on mitigating the strength of sonic booms. This book follows up on a case study from 2009, “Softening the Sonic Boom: 50 Years of NASA Research.” That relatively short survey was published in volume I of NASA’s Contributions to Aeronautics (NASA SP-2010-570, available at http://www.nasa.gov/connect/ebooks/aero_contributions1_detail.html).</description>
<size>7705788</size>
</item><item>
<title>Thinking Obliquely: Robert T. Jones, the Oblique Wing, NASA's AD-1 Demonstrator, and its Legacy</title>
<category>Paper</category>
<infohash>695a01d55152783501f6337343331ff2a832eb8c</infohash>
<guid>https://academictorrents.com/details/695a01d55152783501f6337343331ff2a832eb8c</guid>
<link>https://academictorrents.com/details/695a01d55152783501f6337343331ff2a832eb8c</link>
<description>On December 21, 1979, the National Aeronautics and Space Administration (NASA) AD-1 Oblique Wing Research Aircraft (OWRA) took off from the main runway at Edwards Air Force Base (AFB), CA, for a 45-minute checkout flight. It marked the world’s first flight of a piloted oblique-wing airplane. This historic flight, which was flown with the airplane’s wing at its “straight” (0-degree angle) position, was soon followed by flights at wing angles of 15 degrees, 20 degrees, 45 degrees, and finally on April 24, 1981, at the 60-degree-angle design goal, thus proving the aerodynamic concept of an airplane with an oblique-wing configuration. This initial oblique-wing program, which ran from 1976 through 1982, was a joint effort between NASA’s Ames Research Center and Dryden Flight Research Center, CA, thus giving rise to the aircraft’s name: Ames-Dryden AD-1 Oblique Wing Research Aircraft. Chapter 1 reviews the life of NASA aerodynamicist Robert T. Jones and his path to the oblique wing. Chapter 2 covers the extensive wind tunnel, model, computer-code, and simulation testing, first at Langley and later at Ames, as well as a number of NASA industry design contracts undertaken by Boeing and Lockheed. Chapter 3 reviews the design and fabrication of the AD-1 Oblique Wing Research Aircraft and its subsequent proposed use as a joined-wing demonstrator. Chapter 4 describes the flight testing and flight evaluation of the AD-1. Chapter 5 reviews the supersonic F-8 followup oblique-wing program. And, finally, chapter 6 reviews the subsequent oblique-wing plans and proposals. Appendices present the physical characteristics of the AD-1 aircraft, a detailed description of it, and a summary flight log of its flight research program.</description>
<size>3444391</size>
</item><item>
<title>Sweeping Forward: Developing &amp; Flight Testing the Grumman X-29A Forward Swept Wing Research Aircraft</title>
<category>Paper</category>
<infohash>c003467de1b826a2294c8ecd668df65e916ce613</infohash>
<guid>https://academictorrents.com/details/c003467de1b826a2294c8ecd668df65e916ce613</guid>
<link>https://academictorrents.com/details/c003467de1b826a2294c8ecd668df65e916ce613</link>
<description>Aircraft design, more than many other disciplines, exemplifies the phrase “form follows function.” The laws of physics demand it. Aeronautical designers have always reached forward, stretching capabilities as far as the constraints of gravity and the limits of materials would allow their genius to probe. The emerging computer flight control and composite structures revolutions of the 1970s promised designers access to a hitherto impossible dream: a forward-swept-wing (FSW) fighter with enhanced maneuverability and efficiency. The benefits of forward-swept wings have long been understood. Both forward- and aft-swept wings yield significant drag reduction in the transonic speed range. Since air flows inboard on forward-swept wings, unlike the outboard flow on traditionally swept wings, the tips of forward-swept wings remain unstalled at higher angles of attack, retaining maneuverability and controllability. The X-29 was an unusual aircraft with a truly unique silhouette. It combined many features that challenged the technologies of its day and represented special problems for the developers and the team of testers responsible for documenting its features and design goals. This book is a look at the big picture.</description>
<size>2933514</size>
</item><item>
<title>Cave of the Winds</title>
<category>Paper</category>
<infohash>4ea82c1269afbd37aadffc8bf4398d94b216b8df</infohash>
<guid>https://academictorrents.com/details/4ea82c1269afbd37aadffc8bf4398d94b216b8df</guid>
<link>https://academictorrents.com/details/4ea82c1269afbd37aadffc8bf4398d94b216b8df</link>
<description>The huge Langley Full-Scale Tunnel building dominated the skyline of Langley Air Force Base for 81 years (1930–2011). The Full-Scale Tunnel was constructed by the National Advisory Committee for Aeronautics (NACA) during an era when biplanes and dirigibles dominated aviation. The results of critical tests conducted within its massive test section contributed to many of the Nation's most important aeronautics and space programs. The historical significance of the Full-Scale Tunnel was formally recognized when it was designated a National Historic Landmark in 1985 by the National Park Service.</description>
<size>12018532</size>
</item><item>
<title>The Relativity of Simultaneity is Wrong.txt</title>
<category>Paper</category>
<infohash>d137ffd5e951cc53cd789aab935bf8e833bf8229</infohash>
<guid>https://academictorrents.com/details/d137ffd5e951cc53cd789aab935bf8e833bf8229</guid>
<link>https://academictorrents.com/details/d137ffd5e951cc53cd789aab935bf8e833bf8229</link>
<description>The relativity of simultaneity says that an observer will conclude an event happened first if the light from that event reaches them first. For example, in Einstein's thought experiment of the two observers, one standing on the platform and one standing on a moving train, Einstein says that if two lightning bolts strike the platform at an equal distance from the observer on the platform, but not at an equal distance from the observer on the train by the time the light from the bolts reaches him, the observer standing on the moving train concludes that the light from the lightning bolt farther away from him came after the one closest to him. However, that is not the case, because we do not conclude that the light from stars far away from us came after the stars closer to us; we account for the time it takes for the light to reach us. So even though the light from stars billions of light-years away reaches us after the light from stars 10 light-years away, we know that the stars closest to us came after the stars farthest away from us, even though the light from the nearest stars reached us first. If the observer on the train accounts for the time it took for the light from the lightning bolts to reach him, he would not conclude that the bolt farther from him came after the bolt closer to him. It's the same idea as accounting for the time it takes for light to reach us from the stars. In Einstein's thought experiment we do the same thing for the lightning bolts as we do for the stars (account for the time it takes for the light to reach us), only on a smaller scale (the lightning bolts are a matter of feet away, as opposed to the stars, which are light-years away).</description>
<size>1873</size>
</item><item>
<title>The MacTeX 2014 Distribution</title>
<category>Dataset</category>
<infohash>749858dabef2d7dc27b7994cc9c3ebdb55d564ee</infohash>
<guid>https://academictorrents.com/details/749858dabef2d7dc27b7994cc9c3ebdb55d564ee</guid>
<link>https://academictorrents.com/details/749858dabef2d7dc27b7994cc9c3ebdb55d564ee</link>
<description>TeX (= tau epsilon chi, and pronounced similar to "blecch", not like the state known for "Tex-Mex" chili) is a computer language designed for use in typesetting; in particular, for typesetting math and other technical (from the Greek "techne" = art/craft, the stem of "technology") material.</description>
<size>2512999211</size>
</item><item>
<title>UMN Sarwat Foursquare Dataset (September 2013)</title>
<category>Dataset</category>
<infohash>b24c73949308b3f6bdd8fea1a485534392eef338</infohash>
<guid>https://academictorrents.com/details/b24c73949308b3f6bdd8fea1a485534392eef338</guid>
<link>https://academictorrents.com/details/b24c73949308b3f6bdd8fea1a485534392eef338</link>
<description>This data set contains 2,153,471 users, 1,143,092 venues, 1,021,970 check-ins, 27,098,490 social connections, and 2,809,581 ratings that users assigned to venues, all extracted from the Foursquare application through the public API. All user information has been anonymized; user geolocations are anonymized as well. Each user is represented by an id and a geospatial location, and the same holds for venues. The data are contained in five files: users.dat, venues.dat, checkins.dat, socialgraph.dat, and ratings.dat. More details about the contents and use of these files follow. Content of Files * users.dat: consists of a set of users such that each user has a unique id and a geospatial location (latitude and longitude) that represents the user's home town location. * venues.dat: consists of a set of venues (e.g., restaurants) such that each venue has a unique id and a geospatial location (latitude and longitude). * checkins.dat: marks the check-ins (visits) of users at venues. Each check-in has a unique id as well as the user id and the venue id. * socialgraph.dat: contains the social-graph edges (connections) that exist between users. Each social connection consists of two users (friends) represented by two unique ids (first_user_id and second_user_id). * ratings.dat: consists of implicit ratings that quantify how much a user likes a specific venue. Credits: Users of the data set must acknowledge its use in resulting publications by citing the following papers: * Mohamed Sarwat, Justin J. Levandoski, Ahmed Eldawy, and Mohamed F. Mokbel. LARS*: A Scalable and Efficient Location-Aware Recommender System. IEEE Transactions on Knowledge and Data Engineering (TKDE). * Justin J. Levandoski, Mohamed Sarwat, Ahmed Eldawy, and Mohamed F. Mokbel. LARS: A Location-Aware Recommender System. ICDE 2012.</description>
<size>160746527</size>
</item><item>
<title>Cat Annotation Dataset Merged</title>
<category>Dataset</category>
<infohash>c501571c29d16d7f41d159d699d0e7fb37092cbd</infohash>
<guid>https://academictorrents.com/details/c501571c29d16d7f41d159d699d0e7fb37092cbd</guid>
<link>https://academictorrents.com/details/c501571c29d16d7f41d159d699d0e7fb37092cbd</link>
<description># Cat Annotation Dataset The CAT dataset includes 10,000 cat images. For each image, we annotate the head of the cat with nine points: two for the eyes, one for the mouth, and six for the ears. The detailed configuration of the annotation is shown in Figure 6 of the original paper: Weiwei Zhang, Jian Sun, and Xiaoou Tang, "Cat Head Detection - How to Effectively Exploit Shape and Texture Features", Proc. of European Conf. Computer Vision, vol. 4, pp. 802-816, 2008. ### Format The annotation data are stored in a file with the name of the corresponding cat image plus ".cat", one annotation file for each cat image. For each annotation file, the annotation data are stored in the following sequence: 1.  Number of points (always 9) 2.  Left Eye 3.  Right Eye 4.  Mouth 5.  Left Ear-1 6.  Left Ear-2 7.  Left Ear-3 8.  Right Ear-1 9.  Right Ear-2 10. Right Ear-3 ### Training, Validation, and Testing We randomly divide the data into three sets: 5,000 images for training, 2,000 images for validation, and 3,000 images for testing. ![](https://i.imgur.com/TKEV2Ov.jpg)</description>
<size>1980831996</size>
</item><item>
<title>VX Heaven Virus Collection 2010-05-18</title>
<category>Dataset</category>
<infohash>34ebe49a48aa532deb9c0dd08a08a017aa04d810</infohash>
<guid>https://academictorrents.com/details/34ebe49a48aa532deb9c0dd08a08a017aa04d810</guid>
<link>https://academictorrents.com/details/34ebe49a48aa532deb9c0dd08a08a017aa04d810</link>
<description>A collection of magazines, virus samples, virus sources, polymorphic engines, virus generators, virus-writing tutorials, articles, books, news archives, etc.</description>
<size>47880726923</size>
</item><item>
<title>Harvard CS E75 - Building Dynamic Websites (2012)</title>
<category>Course</category>
<infohash>7510ba14592ca52719405f5fdf05879c9c84244f</infohash>
<guid>https://academictorrents.com/details/7510ba14592ca52719405f5fdf05879c9c84244f</guid>
<link>https://academictorrents.com/details/7510ba14592ca52719405f5fdf05879c9c84244f</link>
<description>Harvard CS E75 - Building Dynamic Websites (2012)</description>
<size>8016687587</size>
</item><item>
<title>CONSENT! A study of consent violations in the Dutch BDSM scene</title>
<category>Paper</category>
<infohash>513aa70dde94bf0bf154c0ad74568fca92f9fc17</infohash>
<guid>https://academictorrents.com/details/513aa70dde94bf0bf154c0ad74568fca92f9fc17</guid>
<link>https://academictorrents.com/details/513aa70dde94bf0bf154c0ad74568fca92f9fc17</link>
<description>In the Dutch scene, pre-negotiated limits and safewords are ignored on a regular basis. Likewise, many kinksters have experienced scenes that, with hindsight, went too far. This is not always considered bad, and it's certainly not always experienced as abuse. Consent is a less absolute given than usually assumed: consent is the norm, but not always actual practice. A substantial part of the consent violations happens at parties. The idea that parties are safe places for a first scene should be revised at least a little. Kinksters often doubt consent in scenes by other people. Some of those who doubt take action, some don't. Yet those who don't often do so only after discussing the situation with others or a DM. There is no evidence for a massive bystander effect. A small minority has ever felt the need to use a party safeword; however, this is not the case for all victims of consent violations at parties. Although a party safeword could contribute to preventing consent violations, it certainly is no cure-all. KINKYMINDS</description>
<size>570159</size>
</item><item>
<title>NOAA Weather Data 2006</title>
<category>Dataset</category>
<infohash>520dfb5de9d364e3b99fa7d88c30edd41d9309f2</infohash>
<guid>https://academictorrents.com/details/520dfb5de9d364e3b99fa7d88c30edd41d9309f2</guid>
<link>https://academictorrents.com/details/520dfb5de9d364e3b99fa7d88c30edd41d9309f2</link>
<description/>
<size>3532503972</size>
</item><item>
<title>NOAA Weather Data 2005</title>
<category>Dataset</category>
<infohash>c248a928e1311a1371f8e0806ab406e84a16b289</infohash>
<guid>https://academictorrents.com/details/c248a928e1311a1371f8e0806ab406e84a16b289</guid>
<link>https://academictorrents.com/details/c248a928e1311a1371f8e0806ab406e84a16b289</link>
<description/>
<size>3413667866</size>
</item><item>
<title>NOAA Weather Data 2004</title>
<category>Dataset</category>
<infohash>b5a6d86ec233fedd303b1a2ef265c182d8b90cb8</infohash>
<guid>https://academictorrents.com/details/b5a6d86ec233fedd303b1a2ef265c182d8b90cb8</guid>
<link>https://academictorrents.com/details/b5a6d86ec233fedd303b1a2ef265c182d8b90cb8</link>
<description/>
<size>2890018599</size>
</item><item>
<title>NOAA Weather Data 2003</title>
<category>Dataset</category>
<infohash>05bcdffe07ae0021f564e7ee06e7609d44f73510</infohash>
<guid>https://academictorrents.com/details/05bcdffe07ae0021f564e7ee06e7609d44f73510</guid>
<link>https://academictorrents.com/details/05bcdffe07ae0021f564e7ee06e7609d44f73510</link>
<description/>
<size>2666320363</size>
</item><item>
<title>NOAA Weather Data 2002</title>
<category>Dataset</category>
<infohash>6f72e898eb94c51d19241f2904b29b25204702ea</infohash>
<guid>https://academictorrents.com/details/6f72e898eb94c51d19241f2904b29b25204702ea</guid>
<link>https://academictorrents.com/details/6f72e898eb94c51d19241f2904b29b25204702ea</link>
<description/>
<size>2461186641</size>
</item><item>
<title>NOAA Weather Data 2001</title>
<category>Dataset</category>
<infohash>610921783ab6eb8a71a649790be2bb34b911593c</infohash>
<guid>https://academictorrents.com/details/610921783ab6eb8a71a649790be2bb34b911593c</guid>
<link>https://academictorrents.com/details/610921783ab6eb8a71a649790be2bb34b911593c</link>
<description/>
<size>2154287256</size>
</item><item>
<title>NOAA Weather Data 2000</title>
<category>Dataset</category>
<infohash>9c683d7813e486c7ff541b79ed38d9cb0074fe1b</infohash>
<guid>https://academictorrents.com/details/9c683d7813e486c7ff541b79ed38d9cb0074fe1b</guid>
<link>https://academictorrents.com/details/9c683d7813e486c7ff541b79ed38d9cb0074fe1b</link>
<description/>
<size>2046874299</size>
</item><item>
<title>Stability of Bivariate GWAS Biomarker Detection.</title>
<category>Dataset</category>
<infohash>b104760d21c27d96cb813c5cfee795d92948566a</infohash>
<guid>https://academictorrents.com/details/b104760d21c27d96cb813c5cfee795d92948566a</guid>
<link>https://academictorrents.com/details/b104760d21c27d96cb813c5cfee795d92948566a</link>
<description>Given the difficulty and effort required to confirm candidate causal SNPs detected in genome-wide association studies (GWAS), there is no practical way to definitively filter false positives. Recent advances in algorithmics and statistics have enabled repeated exhaustive search for bivariate features in a practical amount of time using standard computational resources, allowing us to use cross-validation to evaluate stability. We performed 10 trials of 2-fold cross-validation of exhaustive bivariate analysis on seven Wellcome Trust Case Control Consortium GWAS datasets, comparing the traditional χ² test for association, the high-performance GBOOST method and the recently proposed GSS statistic (available at http://bioinformatics.research.nicta.com.au/software/gwis/). We use Spearman's correlation to measure the similarity between the folds of cross-validation. To compare incomplete lists of ranks we propose an extension to Spearman's correlation. The extension allows us to consider a natural threshold for feature selection where the correlation is zero. This is the first reported cross-validation study of exhaustive bivariate GWAS feature selection. We found that stability between ranked lists from different cross-validation folds was higher for GSS in the majority of diseases. A thorough analysis of the correlation between SNP frequency and univariate χ² score demonstrated that the χ² test for association is highly confounded by main effects: SNPs with high univariate significance replicably dominate the ranked results. We show that removal of the univariately significant SNPs improves χ² replicability but risks filtering pairs involving SNPs with univariate effects. We empirically confirm that the stability of GSS and GBOOST was not affected by removal of univariately significant SNPs.
These results suggest that the GSS and GBOOST tests are successfully targeting bivariate association with phenotype and that GSS is able to reliably detect a larger set of SNP-pairs than GBOOST in the majority of the data we analysed. However, the χ² test for association was confounded by main effects.</description>
<size>40999327512</size>
</item><item>
<title>MIT OCW 6.006 - Introduction to Algorithms, Fall 2011</title>
<category>Course</category>
<infohash>831041a9411abc5d9c4d58e38ae40e550f8455a1</infohash>
<guid>https://academictorrents.com/details/831041a9411abc5d9c4d58e38ae40e550f8455a1</guid>
<link>https://academictorrents.com/details/831041a9411abc5d9c4d58e38ae40e550f8455a1</link>
<description>This course provides an introduction to mathematical modeling of computational problems. It covers the common algorithms, algorithmic paradigms, and data structures used to solve these problems. The course emphasizes the relationship between algorithms and programming, and introduces basic performance measures and analysis techniques for these problems.</description>
<size>5774789526</size>
</item><item>
<title>LBL-CONN-7 Network Traces</title>
<category>Dataset</category>
<infohash>2060d7faa61dd774f9279be7f3f79cece12ed0ed</infohash>
<guid>https://academictorrents.com/details/2060d7faa61dd774f9279be7f3f79cece12ed0ed</guid>
<link>https://academictorrents.com/details/2060d7faa61dd774f9279be7f3f79cece12ed0ed</link>
<description>Description

This trace contains thirty days' worth of all wide-area TCP connections between the Lawrence Berkeley Laboratory (LBL) and the rest of the world.

Format

The reduced trace was generated by tcp-reduce and has the format explained in that script's documentation. Briefly, the trace is an ASCII file with one line per connection, with the following columns:

- timestamp
- duration
- protocol
- bytes sent by originator of the connection, or ? if not available
- bytes sent by responder to the connection, or ? if not available
- local host - the (renumbered) LBL host that participated in the connection
- remote host - the remote (non-LBL) host that participated in the connection. Remote hosts have not been renumbered, to allow for geographic analysis of the data. Please do not attempt any further traffic analysis regarding the remote hosts.
- state that the connection ended in. The two most important states are SF, indicating normal SYN/FIN completion, and REJ, indicating a rejected connection (the initial SYN elicited a RST in reply). Other states are discussed in the tcp-reduce documentation.
- flags - zero or more of: L, indicating the connection was initiated locally (i.e., the LBL host is the one that began the connection); N, indicating the connection was with nearby U.C. Berkeley. When this dataset was captured, a filter was used so that only nntp traffic with UCB was included, so this flag is only ever set for nntp connections.

Measurement

The trace ran from midnight, Thursday, September 16 1993 through midnight, Friday, October 15 1993 (times are Pacific Standard Time), capturing 606,497 wide-area connections. The tracing was done on the Ethernet DMZ network over which flows all traffic into or out of the Lawrence Berkeley Laboratory, located in Berkeley, California. The raw trace was made using tcpdump on a Sun Sparcstation using the BPF kernel packet filter. Fewer than 15 SYN/FIN/RST packets in a million were dropped. Timestamps have microsecond precision.
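The column layout can be read with a minimal parser sketch (the field names and the sample line are mine, not drawn from the trace; '?' byte counts become None):

```python
def parse_conn(line):
    # Split one tcp-reduce line into the columns listed above.
    parts = line.split()
    return {
        "timestamp": float(parts[0]),
        "duration": float(parts[1]),
        "protocol": parts[2],
        "orig_bytes": None if parts[3] == "?" else int(parts[3]),
        "resp_bytes": None if parts[4] == "?" else int(parts[4]),
        "local_host": parts[5],   # renumbered LBL host
        "remote_host": parts[6],  # full IP address
        "state": parts[7],        # e.g. SF or REJ
        "flags": parts[8:],       # zero or more of L, N
    }

# Hypothetical sample line, not taken from the trace:
rec = parse_conn("748446471.387500 1.2 smtp 342 ? 2 192.0.2.7 SF L")
```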
As noted above, the traffic was filtered to exclude connections with nearby UCB except for nntp.

Privacy

The LBL hosts in the trace have been renumbered. The remote hosts remain as full IP addresses, to allow for geographic analysis of the data. Please do not attempt any further traffic analysis regarding the remote hosts.

Acknowledgements

The trace was made by Vern Paxson (vern@ee.lbl.gov). In publications, please include one or more citations to the papers mentioned below, as appropriate.

Publications

The SF connections in this trace correspond to LBL-7 in the papers:

- Empirically-Derived Analytic Models of Wide-Area TCP Connections, V. Paxson, IEEE/ACM Transactions on Networking, 2(4), pp. 316-336, August 1994;
- Growth Trends in Wide-Area TCP Connections, V. Paxson, IEEE Network, 8(4), pp. 8-17, July 1994;
- Wide-Area Traffic: The Failure of Poisson Modeling, V. Paxson and S. Floyd, IEEE/ACM Transactions on Networking, 3(3), pp. 226-244, June 1995.

Restrictions

The trace may be freely redistributed.</description>
<size>15575483</size>
</item><item>
<title>BU-Web-Client Network Traces</title>
<category>Dataset</category>
<infohash>f305fe91840e1e117bdf27bd6c3970a69d90b92f</infohash>
<guid>https://academictorrents.com/details/f305fe91840e1e117bdf27bd6c3970a69d90b92f</guid>
<link>https://academictorrents.com/details/f305fe91840e1e117bdf27bd6c3970a69d90b92f</link>
<description>Description

These traces contain records of the HTTP requests and user behavior of a set of Mosaic clients running in the Boston University Computer Science Department, spanning the timeframe of 21 November 1994 through 8 May 1995. During the data collection period a total of 9,633 Mosaic sessions were traced, representing a population of 762 different users and resulting in 1,143,839 requests for data transfer.

Format

Trace logfiles contain the sequence of WWW object requests (whether the object was served from the local cache or from the network). Each log file name contains a user id number, converted from Unix UIDs via a one-way function that allows user IDs to be compared for equality but not easily traced back to particular users. The file name also gives the machine on which the session took place and the Unix timestamp when the session started. Boston University is located in the United States Eastern Time Zone. For example, a file named con1.cs20.785526125 is a log of a session from user 1, on machine cs20, starting at time 785526125 (12:42:05 EST, Tuesday, November 22, 1994). Each line in a log corresponds to a single URL requested by the user; it contains the machine name, the timestamp when the request was made, the user id number, the URL, the size of the document (including the overhead of the protocol) and the object retrieval time in seconds (reflecting only actual communication time, and not including the intermediate processing performed by Mosaic in a multi-connection transfer). An example of a line from a condensed log is:

cs20 785526142 920156 "http://cs-www.bu.edu/lib/pics/bu-logo.gif" 1804 0.484092

Lines with the number of bytes equal to 0 and retrieval delay equal to 0.0 mean that the request was satisfied by Mosaic's internal cache.

Measurement

To collect this data we installed an instrumented version of Mosaic in the general computing environment at Boston University's Computer Science Department.
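The condensed-log line format can be read with a short parser sketch (the field names are mine; shlex handles the quoted URL, and a zero byte count with zero delay marks a cache hit, as described above):

```python
import shlex

def parse_request(line):
    # Split one condensed-log line into the fields described above.
    machine, ts, uid, url, nbytes, delay = shlex.split(line)
    return {
        "machine": machine,
        "timestamp": int(ts),
        "user_id": uid,
        "url": url,  # shlex strips the surrounding double quotes
        "bytes": int(nbytes),
        "delay": float(delay),
        "from_cache": nbytes == "0" and float(delay) == 0.0,
    }

# The example line from the description above:
req = parse_request(
    'cs20 785526142 920156 "http://cs-www.bu.edu/lib/pics/bu-logo.gif"'
    ' 1804 0.484092'
)
```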
This environment consists principally of 37 SparcStation 2 workstations connected in a local network, which is divided into 2 subnets. Each workstation has its own local disk; logs were written to the local disk and subsequently transferred to a central repository. We began by collecting data on a subset of the workstations only, while testing our data collection process. This period lasted from 21 November 1994 until 17 January 1995. When we were satisfied that data collection was occurring correctly, we extended the data collection process to include all workstations; data collection then took place until 8 May 1995. Since Mosaic ceased to be the dominant browser in use by early March 1995, the most representative portion of the traces covers the period 21 November 1994 through 28 February 1995.

Privacy

The user IDs in these logs have been renumbered to protect privacy.

Acknowledgements

These logs were collected by the members of the Oceans research group at Boston University. Mosaic was instrumented by Carlos Cunha (carro@cs.bu.edu). When referring to the use of these traces in published work, please cite: Characteristics of WWW Client Traces, Carlos A. Cunha, Azer Bestavros and Mark E. Crovella, Boston University Department of Computer Science, Technical Report TR-95-010, April 1995.</description>
<size>13786856</size>
</item><item>
<title>Massachusetts USGS 15cm Color Ortho Imagery (2008/2009) - JPEG2000 Format</title>
<category>Dataset</category>
<infohash>2080e36b9ba96a3736de959c28db6e039e5a8bc1</infohash>
<guid>https://academictorrents.com/details/2080e36b9ba96a3736de959c28db6e039e5a8bc1</guid>
<link>https://academictorrents.com/details/2080e36b9ba96a3736de959c28db6e039e5a8bc1</link>
<description>This data was converted from the MassGIS coq2008_15cm_sid data. Converted by Joseph Paul Cohen, 2014. In spring 2008, the U.S. Geological Survey, as part of its Boston 133 Cities Urban Area mapping program, contracted for true-color imagery covering the metropolitan Boston area and beyond. Image type for the entire region (more than 1.7 million acres) is 24-bit, 3-band (red, green, blue) natural color. Each band has pixel values ranging from 0 to 255. Pixel resolution is 30 cm, or approximately one foot. Additionally, 30 municipalities participated in the Boston Upgrade of the USGS project; these cities and towns contributed funding for separate flights to produce 4-band (red, green, blue, near-infrared) imagery. Pixel resolution for these images is 15 centimeters (approximately 6 inches). In spring 2009, USGS continued the project and 4-band 30cm imagery was obtained for the remainder of the state. Additionally, 14 municipalities provided funding for 4-band 15cm imagery to cover their communities. This digital orthoimagery can serve a variety of purposes, from general planning, to field reference for spatial analysis, to a tool for data development and revision of vector maps. It can also serve as a reference layer or basemap for myriad applications inside geographic information system (GIS) software. The source images were distributed in the MrSID Generation 2 format, at a 15:1 lossy compression ratio, 3 bands (RGB), as 1,500 meter × 1,500 meter tiles (based on the 2008/2009 USGS Color Ortho Index coq2008-09_index.pdf tiling scheme; refer to the 8-digit numbers in each tile).</description>
<size>79357042705</size>
</item><item>
<title>Wikipedia English Official Offline Edition 2014-01-02</title>
<category>Dataset</category>
<infohash>3ddc1eb44eb97e58d7636a9ffaa6924f2ed9fb40</infohash>
<guid>https://academictorrents.com/details/3ddc1eb44eb97e58d7636a9ffaa6924f2ed9fb40</guid>
<link>https://academictorrents.com/details/3ddc1eb44eb97e58d7636a9ffaa6924f2ed9fb40</link>
<description>Wikipedia offers free copies of all available content to interested users. These databases can be used for mirroring, personal use, informal backups, offline use or database queries (such as for Wikipedia:Maintenance). All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL). Images and other files are available under different terms, as detailed on their description pages. For our advice about complying with these licenses, see Wikipedia:Copyrights.</description>
<size>10492810412</size>
</item><item>
<title>Wikipedia English Official Offline Edition 2014-02-03</title>
<category>Dataset</category>
<infohash>9512a1f6d21e5012c06a1c9b8e2dd4796ecc77a9</infohash>
<guid>https://academictorrents.com/details/9512a1f6d21e5012c06a1c9b8e2dd4796ecc77a9</guid>
<link>https://academictorrents.com/details/9512a1f6d21e5012c06a1c9b8e2dd4796ecc77a9</link>
<description>Wikipedia offers free copies of all available content to interested users. These databases can be used for mirroring, personal use, informal backups, offline use or database queries (such as for Wikipedia:Maintenance). All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL). Images and other files are available under different terms, as detailed on their description pages. For our advice about complying with these licenses, see Wikipedia:Copyrights.</description>
<size>10587333911</size>
</item><item>
<title>Learning Probabilistic Models: An Expected Utility Maximization Approach</title>
<category>Paper</category>
<infohash>932f340dabe2e3416e210b9b62bfdb8bf6d7ffa3</infohash>
<guid>https://academictorrents.com/details/932f340dabe2e3416e210b9b62bfdb8bf6d7ffa3</guid>
<link>https://academictorrents.com/details/932f340dabe2e3416e210b9b62bfdb8bf6d7ffa3</link>
<description/>
<size>271695</size>
</item><item>
<title>Lossless Online Bayesian Bagging</title>
<category>Paper</category>
<infohash>3505ee186a69f85a659c187209e896c10ee5af83</infohash>
<guid>https://academictorrents.com/details/3505ee186a69f85a659c187209e896c10ee5af83</guid>
<link>https://academictorrents.com/details/3505ee186a69f85a659c187209e896c10ee5af83</link>
<description/>
<size>383895</size>
</item><item>
<title>PREA: Personalized Recommendation Algorithms Toolkit</title>
<category>Paper</category>
<infohash>20c5722a541aa1ba00308b29f32edddbf8d99f28</infohash>
<guid>https://academictorrents.com/details/20c5722a541aa1ba00308b29f32edddbf8d99f28</guid>
<link>https://academictorrents.com/details/20c5722a541aa1ba00308b29f32edddbf8d99f28</link>
<description/>
<size>290788</size>
</item><item>
<title>Feature Selection via Dependence Maximization</title>
<category>Paper</category>
<infohash>cf7bf7de805b2266ded5349020a94a6670341096</infohash>
<guid>https://academictorrents.com/details/cf7bf7de805b2266ded5349020a94a6670341096</guid>
<link>https://academictorrents.com/details/cf7bf7de805b2266ded5349020a94a6670341096</link>
<description/>
<size>1553412</size>
</item><item>
<title>Importance Sampling for Continuous Time Bayesian Networks</title>
<category>Paper</category>
<infohash>1667047cab708a174b089171bfaa40245bd7f83b</infohash>
<guid>https://academictorrents.com/details/1667047cab708a174b089171bfaa40245bd7f83b</guid>
<link>https://academictorrents.com/details/1667047cab708a174b089171bfaa40245bd7f83b</link>
<description/>
<size>246974</size>
</item><item>
<title>Shark(Machine Learning Open Source Software Paper)</title>
<category>Paper</category>
<infohash>0b19f22d3cd0e9736fa1c8da332e083432c265e9</infohash>
<guid>https://academictorrents.com/details/0b19f22d3cd0e9736fa1c8da332e083432c265e9</guid>
<link>https://academictorrents.com/details/0b19f22d3cd0e9736fa1c8da332e083432c265e9</link>
<description/>
<size>503857</size>
</item><item>
<title>Designing Committees of Models through Deliberate Weighting of Data Points</title>
<category>Paper</category>
<infohash>234ab9d3e095f06f64a92ec1116b569e823463e8</infohash>
<guid>https://academictorrents.com/details/234ab9d3e095f06f64a92ec1116b569e823463e8</guid>
<link>https://academictorrents.com/details/234ab9d3e095f06f64a92ec1116b569e823463e8</link>
<description/>
<size>237348</size>
</item><item>
<title>Learning Evaluation Functions to Improve Optimization by Local Search</title>
<category>Paper</category>
<infohash>1192c9b53b9b8e9d0770920ed80fbb079ac3996d</infohash>
<guid>https://academictorrents.com/details/1192c9b53b9b8e9d0770920ed80fbb079ac3996d</guid>
<link>https://academictorrents.com/details/1192c9b53b9b8e9d0770920ed80fbb079ac3996d</link>
<description/>
<size>568358</size>
</item><item>
<title>Facilitating Score and Causal Inference Trees for Large Observational Studies</title>
<category>Paper</category>
<infohash>0ee5b8cf1a70ba71aba9c80006ac5f2bb93ad498</infohash>
<guid>https://academictorrents.com/details/0ee5b8cf1a70ba71aba9c80006ac5f2bb93ad498</guid>
<link>https://academictorrents.com/details/0ee5b8cf1a70ba71aba9c80006ac5f2bb93ad498</link>
<description/>
<size>355273</size>
</item><item>
<title>Some Greedy Learning Algorithms for Sparse Regression and Classification with Mercer Kernels</title>
<category>Paper</category>
<infohash>78694e1b7d119bf3f90c774a9d64c7397561fb05</infohash>
<guid>https://academictorrents.com/details/78694e1b7d119bf3f90c774a9d64c7397561fb05</guid>
<link>https://academictorrents.com/details/78694e1b7d119bf3f90c774a9d64c7397561fb05</link>
<description/>
<size>231505</size>
</item><item>
<title>Learnability, Stability and Uniform Convergence</title>
<category>Paper</category>
<infohash>3fa89ca74baca9a6183c78adcea426cc3f4bf3ef</infohash>
<guid>https://academictorrents.com/details/3fa89ca74baca9a6183c78adcea426cc3f4bf3ef</guid>
<link>https://academictorrents.com/details/3fa89ca74baca9a6183c78adcea426cc3f4bf3ef</link>
<description/>
<size>331233</size>
</item><item>
<title>Using Markov Blankets for Causal Structure Learning(Special Topic on Causality)</title>
<category>Paper</category>
<infohash>8e08ddb27ae589a238417b68e587dcb48350055b</infohash>
<guid>https://academictorrents.com/details/8e08ddb27ae589a238417b68e587dcb48350055b</guid>
<link>https://academictorrents.com/details/8e08ddb27ae589a238417b68e587dcb48350055b</link>
<description/>
<size>506990</size>
</item><item>
<title>On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation</title>
<category>Paper</category>
<infohash>1fb1a2623f50c709710e610877d665ca1933fe01</infohash>
<guid>https://academictorrents.com/details/1fb1a2623f50c709710e610877d665ca1933fe01</guid>
<link>https://academictorrents.com/details/1fb1a2623f50c709710e610877d665ca1933fe01</link>
<description/>
<size>759286</size>
</item><item>
<title>Entropy Search for Information-Efficient Global Optimization</title>
<category>Paper</category>
<infohash>b05a6f20298620c47565f3a219372c7365d68fcb</infohash>
<guid>https://academictorrents.com/details/b05a6f20298620c47565f3a219372c7365d68fcb</guid>
<link>https://academictorrents.com/details/b05a6f20298620c47565f3a219372c7365d68fcb</link>
<description/>
<size>891099</size>
</item><item>
<title>Minimax-Optimal Rates For Sparse Additive Models Over Kernel Classes Via Convex Programming</title>
<category>Paper</category>
<infohash>77750329f24c8f314dd436be7b00bd6ec370ff2f</infohash>
<guid>https://academictorrents.com/details/77750329f24c8f314dd436be7b00bd6ec370ff2f</guid>
<link>https://academictorrents.com/details/77750329f24c8f314dd436be7b00bd6ec370ff2f</link>
<description/>
<size>316400</size>
</item><item>
<title>Local and Global Scaling Reduce Hubs in Space</title>
<category>Paper</category>
<infohash>4308fad66470491613d8e4886126fd2482090480</infohash>
<guid>https://academictorrents.com/details/4308fad66470491613d8e4886126fd2482090480</guid>
<link>https://academictorrents.com/details/4308fad66470491613d8e4886126fd2482090480</link>
<description/>
<size>388422</size>
</item><item>
<title>Finding Recurrent Patterns from Continuous Sign Language Sentences for Automated Extraction of Signs</title>
<category>Paper</category>
<infohash>b37898e31c5ac61100e878ce252025c023470ad5</infohash>
<guid>https://academictorrents.com/details/b37898e31c5ac61100e878ce252025c023470ad5</guid>
<link>https://academictorrents.com/details/b37898e31c5ac61100e878ce252025c023470ad5</link>
<description/>
<size>982838</size>
</item><item>
<title>Introduction to Special Issue on Independent Components Analysis</title>
<category>Paper</category>
<infohash>57337b3b7e9018a8f640c5033bd6c77d238b03e8</infohash>
<guid>https://academictorrents.com/details/57337b3b7e9018a8f640c5033bd6c77d238b03e8</guid>
<link>https://academictorrents.com/details/57337b3b7e9018a8f640c5033bd6c77d238b03e8</link>
<description/>
<size>259595</size>
</item><item>
<title>A Compression Approach to Support Vector Model Selection</title>
<category>Paper</category>
<infohash>88785a8c94e9ffa4fca127ae27f7a5109eb3648b</infohash>
<guid>https://academictorrents.com/details/88785a8c94e9ffa4fca127ae27f7a5109eb3648b</guid>
<link>https://academictorrents.com/details/88785a8c94e9ffa4fca127ae27f7a5109eb3648b</link>
<description/>
<size>1358264</size>
</item><item>
<title>Special Issue on the Eighteenth International Conference on Machine Learning (ICML2001)</title>
<category>Paper</category>
<infohash>818692398ce10db04c3c86df13a5732212f65ef9</infohash>
<guid>https://academictorrents.com/details/818692398ce10db04c3c86df13a5732212f65ef9</guid>
<link>https://academictorrents.com/details/818692398ce10db04c3c86df13a5732212f65ef9</link>
<description/>
<size>443405</size>
</item><item>
<title>Continuous Time Bayesian Network Reasoning and Learning Engine</title>
<category>Paper</category>
<infohash>bfda57d28a4b8a0370bcc80bf096f64c08b6c3b6</infohash>
<guid>https://academictorrents.com/details/bfda57d28a4b8a0370bcc80bf096f64c08b6c3b6</guid>
<link>https://academictorrents.com/details/bfda57d28a4b8a0370bcc80bf096f64c08b6c3b6</link>
<description/>
<size>34130</size>
</item><item>
<title>Learning Semantic Lexicons from a Part-of-Speech and Semantically Tagged Corpus Using Inductive Logic Programming</title>
<category>Paper</category>
<infohash>dd06605483247bc97dfdf2a688627f9552792557</infohash>
<guid>https://academictorrents.com/details/dd06605483247bc97dfdf2a688627f9552792557</guid>
<link>https://academictorrents.com/details/dd06605483247bc97dfdf2a688627f9552792557</link>
<description/>
<size>533074</size>
</item><item>
<title>A Unified View of Performance Metrics: Translating Threshold Choice into Expected Classification Loss</title>
<category>Paper</category>
<infohash>dc07b26f8acd99dfc7d1763d46079930a1d1b4c5</infohash>
<guid>https://academictorrents.com/details/dc07b26f8acd99dfc7d1763d46079930a1d1b4c5</guid>
<link>https://academictorrents.com/details/dc07b26f8acd99dfc7d1763d46079930a1d1b4c5</link>
<description/>
<size>563388</size>
</item><item>
<title>Statistical Dynamics of On-line Independent Component Analysis</title>
<category>Paper</category>
<infohash>3b1322671d42bb6e951a700fad288137dcaefece</infohash>
<guid>https://academictorrents.com/details/3b1322671d42bb6e951a700fad288137dcaefece</guid>
<link>https://academictorrents.com/details/3b1322671d42bb6e951a700fad288137dcaefece</link>
<description/>
<size>1016257</size>
</item><item>
<title>Latent Dirichlet Allocation</title>
<category>Paper</category>
<infohash>886506949d35110e5cfcf1cd5c5ffb16e0dc904b</infohash>
<guid>https://academictorrents.com/details/886506949d35110e5cfcf1cd5c5ffb16e0dc904b</guid>
<link>https://academictorrents.com/details/886506949d35110e5cfcf1cd5c5ffb16e0dc904b</link>
<description/>
<size>388437</size>
</item><item>
<title>Model Averaging for Prediction with Discrete Bayesian Networks</title>
<category>Paper</category>
<infohash>65a8c28e71972ecba5d24fd655b627f1b2423960</infohash>
<guid>https://academictorrents.com/details/65a8c28e71972ecba5d24fd655b627f1b2423960</guid>
<link>https://academictorrents.com/details/65a8c28e71972ecba5d24fd655b627f1b2423960</link>
<description/>
<size>251423</size>
</item><item>
<title>On the Equivalence of Linear Dimensionality-Reducing Transformations</title>
<category>Paper</category>
<infohash>709afa4fc8581b99cd6f0991f6f660786f2326a3</infohash>
<guid>https://academictorrents.com/details/709afa4fc8581b99cd6f0991f6f660786f2326a3</guid>
<link>https://academictorrents.com/details/709afa4fc8581b99cd6f0991f6f660786f2326a3</link>
<description/>
<size>336137</size>
</item><item>
<title>Multi-Assignment Clustering for Boolean Data</title>
<category>Paper</category>
<infohash>df72006546f40f6fc8313fdec7dd4e10d4b913c1</infohash>
<guid>https://academictorrents.com/details/df72006546f40f6fc8313fdec7dd4e10d4b913c1</guid>
<link>https://academictorrents.com/details/df72006546f40f6fc8313fdec7dd4e10d4b913c1</link>
<description/>
<size>604577</size>
</item><item>
<title>A Kernel Two-Sample Test</title>
<category>Paper</category>
<infohash>234705eaac998d5c04b8e4c8a096c120d66404a0</infohash>
<guid>https://academictorrents.com/details/234705eaac998d5c04b8e4c8a096c120d66404a0</guid>
<link>https://academictorrents.com/details/234705eaac998d5c04b8e4c8a096c120d66404a0</link>
<description/>
<size>479876</size>
</item><item>
<title>On the Necessity of Irrelevant Variables</title>
<category>Paper</category>
<infohash>ffa02bdccbfd01ac5ce35c2bfee6210abb4ddd0f</infohash>
<guid>https://academictorrents.com/details/ffa02bdccbfd01ac5ce35c2bfee6210abb4ddd0f</guid>
<link>https://academictorrents.com/details/ffa02bdccbfd01ac5ce35c2bfee6210abb4ddd0f</link>
<description/>
<size>299759</size>
</item><item>
<title>A Local Spectral Method for Graphs: With Applications to Improving Graph Partitions and Exploring Data Graphs Locally</title>
<category>Paper</category>
<infohash>5a0a7c5620fb0a6525dd940cff562869c737a978</infohash>
<guid>https://academictorrents.com/details/5a0a7c5620fb0a6525dd940cff562869c737a978</guid>
<link>https://academictorrents.com/details/5a0a7c5620fb0a6525dd940cff562869c737a978</link>
<description/>
<size>968869</size>
</item><item>
<title>Extensions to Metric-Based Model Selection</title>
<category>Paper</category>
<infohash>93a55ffbc30b6cfefaa4b665e979717e99d4405e</infohash>
<guid>https://academictorrents.com/details/93a55ffbc30b6cfefaa4b665e979717e99d4405e</guid>
<link>https://academictorrents.com/details/93a55ffbc30b6cfefaa4b665e979717e99d4405e</link>
<description/>
<size>657989</size>
</item><item>
<title>Restricted Strong Convexity and Weighted Matrix Completion: Optimal Bounds with Noise</title>
<category>Paper</category>
<infohash>d1c93e3859c2d0f6cc9a7e4ad1a6a04c28f2d5c9</infohash>
<guid>https://academictorrents.com/details/d1c93e3859c2d0f6cc9a7e4ad1a6a04c28f2d5c9</guid>
<link>https://academictorrents.com/details/d1c93e3859c2d0f6cc9a7e4ad1a6a04c28f2d5c9</link>
<description/>
<size>280196</size>
</item><item>
<title>Erratum: SGDQN is Less Careful than Expected</title>
<category>Paper</category>
<infohash>5ae1c9c1b06c251c154efefee93dcb23539d914e</infohash>
<guid>https://academictorrents.com/details/5ae1c9c1b06c251c154efefee93dcb23539d914e</guid>
<link>https://academictorrents.com/details/5ae1c9c1b06c251c154efefee93dcb23539d914e</link>
<description/>
<size>998426</size>
</item><item>
<title>Minimax Manifold Estimation</title>
<category>Paper</category>
<infohash>5a779dec342b3228270697565c01547007ba131a</infohash>
<guid>https://academictorrents.com/details/5a779dec342b3228270697565c01547007ba131a</guid>
<link>https://academictorrents.com/details/5a779dec342b3228270697565c01547007ba131a</link>
<description/>
<size>2325836</size>
</item><item>
<title>A Neural Probabilistic Language Model</title>
<category>Paper</category>
<infohash>ccdfe60f5bb75ca85473c90d483e3802d53f5d12</infohash>
<guid>https://academictorrents.com/details/ccdfe60f5bb75ca85473c90d483e3802d53f5d12</guid>
<link>https://academictorrents.com/details/ccdfe60f5bb75ca85473c90d483e3802d53f5d12</link>
<description/>
<size>242236</size>
</item><item>
<title>Lagrangian Support Vector Machines (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>50df310af8d5acc4986cafae0cd716decb2c0592</infohash>
<guid>https://academictorrents.com/details/50df310af8d5acc4986cafae0cd716decb2c0592</guid>
<link>https://academictorrents.com/details/50df310af8d5acc4986cafae0cd716decb2c0592</link>
<description/>
<size>280392</size>
</item><item>
<title>Pairwise Support Vector Machines and their Application to Large Scale Problems</title>
<category>Paper</category>
<infohash>d058dbfcc0abbcc7ea60eb1696f2d463f692a241</infohash>
<guid>https://academictorrents.com/details/d058dbfcc0abbcc7ea60eb1696f2d463f692a241</guid>
<link>https://academictorrents.com/details/d058dbfcc0abbcc7ea60eb1696f2d463f692a241</link>
<description/>
<size>262888</size>
</item><item>
<title>Lyapunov Design for Safe Reinforcement Learning</title>
<category>Paper</category>
<infohash>622c9de43b3ce42daa46a9622e36d3e5bb2c1a21</infohash>
<guid>https://academictorrents.com/details/622c9de43b3ce42daa46a9622e36d3e5bb2c1a21</guid>
<link>https://academictorrents.com/details/622c9de43b3ce42daa46a9622e36d3e5bb2c1a21</link>
<description/>
<size>1127186</size>
</item><item>
<title>Approximations for Binary Gaussian Process Classification</title>
<category>Paper</category>
<infohash>23b7baf9db5d41de8f1f5f57e25f97c26a2637e6</infohash>
<guid>https://academictorrents.com/details/23b7baf9db5d41de8f1f5f57e25f97c26a2637e6</guid>
<link>https://academictorrents.com/details/23b7baf9db5d41de8f1f5f57e25f97c26a2637e6</link>
<description/>
<size>62856</size>
</item><item>
<title>Sally: A Tool for Embedding Strings in Vector Spaces</title>
<category>Paper</category>
<infohash>acef731ff2e0ed888e7f860cb27c1417866c4f95</infohash>
<guid>https://academictorrents.com/details/acef731ff2e0ed888e7f860cb27c1417866c4f95</guid>
<link>https://academictorrents.com/details/acef731ff2e0ed888e7f860cb27c1417866c4f95</link>
<description/>
<size>413354</size>
</item><item>
<title>Rational Kernels: Theory and Algorithms (Special Topic on Learning Theory)</title>
<category>Paper</category>
<infohash>64aacc7f046be66cfcc98e2c684bc5aa1c97d7ca</infohash>
<guid>https://academictorrents.com/details/64aacc7f046be66cfcc98e2c684bc5aa1c97d7ca</guid>
<link>https://academictorrents.com/details/64aacc7f046be66cfcc98e2c684bc5aa1c97d7ca</link>
<description/>
<size>322046</size>
</item><item>
<title>A Robust Minimax Approach to Classification</title>
<category>Paper</category>
<infohash>bff8467895bf3ea2712ace7d87efb7a5f85b560b</infohash>
<guid>https://academictorrents.com/details/bff8467895bf3ea2712ace7d87efb7a5f85b560b</guid>
<link>https://academictorrents.com/details/bff8467895bf3ea2712ace7d87efb7a5f85b560b</link>
<description/>
<size>523763</size>
</item><item>
<title>No Unbiased Estimator of the Variance of K-Fold Cross-Validation</title>
<category>Paper</category>
<infohash>cb8c8a3f743a330e249ef360ba6fa2147f4702ba</infohash>
<guid>https://academictorrents.com/details/cb8c8a3f743a330e249ef360ba6fa2147f4702ba</guid>
<link>https://academictorrents.com/details/cb8c8a3f743a330e249ef360ba6fa2147f4702ba</link>
<description/>
<size>214685</size>
</item><item>
<title>Knowledge-Based Kernel Approximation</title>
<category>Paper</category>
<infohash>501e017a78ab24b0063bd041f675af7675689b20</infohash>
<guid>https://academictorrents.com/details/501e017a78ab24b0063bd041f675af7675689b20</guid>
<link>https://academictorrents.com/details/501e017a78ab24b0063bd041f675af7675689b20</link>
<description/>
<size>152553</size>
</item><item>
<title>Regularized Discriminant Analysis, Ridge Regression and Beyond</title>
<category>Paper</category>
<infohash>e237f7aaaa9cd93a68b3abe3583c1bb75dcaad3d</infohash>
<guid>https://academictorrents.com/details/e237f7aaaa9cd93a68b3abe3583c1bb75dcaad3d</guid>
<link>https://academictorrents.com/details/e237f7aaaa9cd93a68b3abe3583c1bb75dcaad3d</link>
<description/>
<size>332200</size>
</item><item>
<title>Comments on the Complete Characterization of a Family of Solutions to a Generalized Fisher Criterion</title>
<category>Paper</category>
<infohash>5aa0104d675b610e87a5b845a03140ea4657cc0d</infohash>
<guid>https://academictorrents.com/details/5aa0104d675b610e87a5b845a03140ea4657cc0d</guid>
<link>https://academictorrents.com/details/5aa0104d675b610e87a5b845a03140ea4657cc0d</link>
<description/>
<size>439537</size>
</item><item>
<title>On the Rate of Convergence of Regularized Boosting Classifiers</title>
<category>Paper</category>
<infohash>6df40c5b34af78337350e58acf68b7996400ca1e</infohash>
<guid>https://academictorrents.com/details/6df40c5b34af78337350e58acf68b7996400ca1e</guid>
<link>https://academictorrents.com/details/6df40c5b34af78337350e58acf68b7996400ca1e</link>
<description/>
<size>198893</size>
</item><item>
<title>Ultraconservative Online Algorithms for Multiclass Problems</title>
<category>Paper</category>
<infohash>2ea5a07cb4d042f1e177715b5104d8a4b3d78332</infohash>
<guid>https://academictorrents.com/details/2ea5a07cb4d042f1e177715b5104d8a4b3d78332</guid>
<link>https://academictorrents.com/details/2ea5a07cb4d042f1e177715b5104d8a4b3d78332</link>
<description/>
<size>6696501</size>
</item><item>
<title>Nash Q-Learning for General-Sum Stochastic Games</title>
<category>Paper</category>
<infohash>4e78d4676e3609d7748288f77ae9a4df5732766f</infohash>
<guid>https://academictorrents.com/details/4e78d4676e3609d7748288f77ae9a4df5732766f</guid>
<link>https://academictorrents.com/details/4e78d4676e3609d7748288f77ae9a4df5732766f</link>
<description/>
<size>780696</size>
</item><item>
<title>R-MAX - A General Polynomial Time Algorithm for Near-Optimal Reinforcement Learning</title>
<category>Paper</category>
<infohash>3699dda91f82f2c6083f50166e3675779762ee93</infohash>
<guid>https://academictorrents.com/details/3699dda91f82f2c6083f50166e3675779762ee93</guid>
<link>https://academictorrents.com/details/3699dda91f82f2c6083f50166e3675779762ee93</link>
<description/>
<size>101231</size>
</item><item>
<title>Security Analysis of Online Centroid Anomaly Detection</title>
<category>Paper</category>
<infohash>d3d9e74feb33847b8965f7c7a7e2fa0f2c30cfde</infohash>
<guid>https://academictorrents.com/details/d3d9e74feb33847b8965f7c7a7e2fa0f2c30cfde</guid>
<link>https://academictorrents.com/details/d3d9e74feb33847b8965f7c7a7e2fa0f2c30cfde</link>
<description/>
<size>912389</size>
</item><item>
<title>Collective Inference for Extraction MRFs Coupled with Symmetric Clique Potentials</title>
<category>Paper</category>
<infohash>e88fe7bda5fbabd26c7bbcb05a2c133bc54a789d</infohash>
<guid>https://academictorrents.com/details/e88fe7bda5fbabd26c7bbcb05a2c133bc54a789d</guid>
<link>https://academictorrents.com/details/e88fe7bda5fbabd26c7bbcb05a2c133bc54a789d</link>
<description/>
<size>465776</size>
</item><item>
<title>Exploration in Relational Domains for Model-based Reinforcement Learning</title>
<category>Paper</category>
<infohash>a13eed285b9f81acac1a8cbb4bedf91ce1524b76</infohash>
<guid>https://academictorrents.com/details/a13eed285b9f81acac1a8cbb4bedf91ce1524b76</guid>
<link>https://academictorrents.com/details/a13eed285b9f81acac1a8cbb4bedf91ce1524b76</link>
<description/>
<size>697869</size>
</item><item>
<title>Online Choice of Active Learning Algorithms</title>
<category>Paper</category>
<infohash>8884920c4f634ffb624b921eb9046dfaecacf6b1</infohash>
<guid>https://academictorrents.com/details/8884920c4f634ffb624b921eb9046dfaecacf6b1</guid>
<link>https://academictorrents.com/details/8884920c4f634ffb624b921eb9046dfaecacf6b1</link>
<description/>
<size>518473</size>
</item><item>
<title>Algorithms for Sparse Linear Classifiers in the Massive Data Setting</title>
<category>Paper</category>
<infohash>a2a44d367d849fd323b3da394f52b1b4bbd38ae9</infohash>
<guid>https://academictorrents.com/details/a2a44d367d849fd323b3da394f52b1b4bbd38ae9</guid>
<link>https://academictorrents.com/details/a2a44d367d849fd323b3da394f52b1b4bbd38ae9</link>
<description/>
<size>386096</size>
</item><item>
<title>Composite Binary Losses</title>
<category>Paper</category>
<infohash>ae3c57eec19b8e3217fef758e20b4461cda5bc0c</infohash>
<guid>https://academictorrents.com/details/ae3c57eec19b8e3217fef758e20b4461cda5bc0c</guid>
<link>https://academictorrents.com/details/ae3c57eec19b8e3217fef758e20b4461cda5bc0c</link>
<description/>
<size>706127</size>
</item><item>
<title>On-Line Sequential Bin Packing</title>
<category>Paper</category>
<infohash>0495f04d838b46686431fd619f3e215bceaa2788</infohash>
<guid>https://academictorrents.com/details/0495f04d838b46686431fd619f3e215bceaa2788</guid>
<link>https://academictorrents.com/details/0495f04d838b46686431fd619f3e215bceaa2788</link>
<description/>
<size>166101</size>
</item><item>
<title>Quantum Set Intersection and its Application to Associative Memory</title>
<category>Paper</category>
<infohash>34a70fa34c879bf42656631c32a7712e99dcd028</infohash>
<guid>https://academictorrents.com/details/34a70fa34c879bf42656631c32a7712e99dcd028</guid>
<link>https://academictorrents.com/details/34a70fa34c879bf42656631c32a7712e99dcd028</link>
<description/>
<size>404248</size>
</item><item>
<title>Model-based Boosting 2.0</title>
<category>Paper</category>
<infohash>3cce95f965195c957591a4127581ae75070ec855</infohash>
<guid>https://academictorrents.com/details/3cce95f965195c957591a4127581ae75070ec855</guid>
<link>https://academictorrents.com/details/3cce95f965195c957591a4127581ae75070ec855</link>
<description/>
<size>103265</size>
</item><item>
<title>Covariance in Unsupervised Learning of Probabilistic Grammars</title>
<category>Paper</category>
<infohash>6d65bd5ba3f10d7dd7759758cb94dbfd4b3e8677</infohash>
<guid>https://academictorrents.com/details/6d65bd5ba3f10d7dd7759758cb94dbfd4b3e8677</guid>
<link>https://academictorrents.com/details/6d65bd5ba3f10d7dd7759758cb94dbfd4b3e8677</link>
<description/>
<size>383374</size>
</item><item>
<title>Hit Miss Networks with Applications to Instance Selection</title>
<category>Paper</category>
<infohash>756bb7c2680231279d573ed63f0b1e24da2cea57</infohash>
<guid>https://academictorrents.com/details/756bb7c2680231279d573ed63f0b1e24da2cea57</guid>
<link>https://academictorrents.com/details/756bb7c2680231279d573ed63f0b1e24da2cea57</link>
<description/>
<size>195861</size>
</item><item>
<title>Lp-Nested Symmetric Distributions</title>
<category>Paper</category>
<infohash>7f313160e766098db344b1656e2ed8789c49c69a</infohash>
<guid>https://academictorrents.com/details/7f313160e766098db344b1656e2ed8789c49c69a</guid>
<link>https://academictorrents.com/details/7f313160e766098db344b1656e2ed8789c49c69a</link>
<description/>
<size>3369257</size>
</item><item>
<title>ILP: A Short Look Back and a Longer Look Forward</title>
<category>Paper</category>
<infohash>9ed45b61015c6141c2852e0cc4f63b978ee6f7eb</infohash>
<guid>https://academictorrents.com/details/9ed45b61015c6141c2852e0cc4f63b978ee6f7eb</guid>
<link>https://academictorrents.com/details/9ed45b61015c6141c2852e0cc4f63b978ee6f7eb</link>
<description/>
<size>11597</size>
</item><item>
<title>Inducing Tree-Substitution Grammars</title>
<category>Paper</category>
<infohash>0f7ed39fca9f78a55a6902e64890856b66fcb5e4</infohash>
<guid>https://academictorrents.com/details/0f7ed39fca9f78a55a6902e64890856b66fcb5e4</guid>
<link>https://academictorrents.com/details/0f7ed39fca9f78a55a6902e64890856b66fcb5e4</link>
<description/>
<size>1118269</size>
</item><item>
<title>Linear Algorithms for Online Multitask Classification</title>
<category>Paper</category>
<infohash>e90de45a88ab4dc26492b4489f438d890281a288</infohash>
<guid>https://academictorrents.com/details/e90de45a88ab4dc26492b4489f438d890281a288</guid>
<link>https://academictorrents.com/details/e90de45a88ab4dc26492b4489f438d890281a288</link>
<description/>
<size>402565</size>
</item><item>
<title>A Convergent Online Single Time Scale Actor Critic Algorithm</title>
<category>Paper</category>
<infohash>2d26acfe408e70b3c26efacea54735088bf82c65</infohash>
<guid>https://academictorrents.com/details/2d26acfe408e70b3c26efacea54735088bf82c65</guid>
<link>https://academictorrents.com/details/2d26acfe408e70b3c26efacea54735088bf82c65</link>
<description/>
<size>325612</size>
</item><item>
<title>Unsupervised Supervised Learning I: Estimating Classification and Regression Errors without Labels</title>
<category>Paper</category>
<infohash>0d23c6099604e45283962057800a6e42f6384b2d</infohash>
<guid>https://academictorrents.com/details/0d23c6099604e45283962057800a6e42f6384b2d</guid>
<link>https://academictorrents.com/details/0d23c6099604e45283962057800a6e42f6384b2d</link>
<description/>
<size>642663</size>
</item><item>
<title>Boosting as a Regularized Path to a Maximum Margin Classifier</title>
<category>Paper</category>
<infohash>6bf88072c9021e637a8a624e79c15e70961ede1e</infohash>
<guid>https://academictorrents.com/details/6bf88072c9021e637a8a624e79c15e70961ede1e</guid>
<link>https://academictorrents.com/details/6bf88072c9021e637a8a624e79c15e70961ede1e</link>
<description/>
<size>1262309</size>
</item><item>
<title>Support Vector Machine Soft Margin Classifiers: Error Analysis</title>
<category>Paper</category>
<infohash>aa95f64966993e878db72d2cc3aecaf1d475716b</infohash>
<guid>https://academictorrents.com/details/aa95f64966993e878db72d2cc3aecaf1d475716b</guid>
<link>https://academictorrents.com/details/aa95f64966993e878db72d2cc3aecaf1d475716b</link>
<description/>
<size>2700586</size>
</item><item>
<title>Energy-Based Models for Sparse Overcomplete Representations</title>
<category>Paper</category>
<infohash>4108bc464fce4a222c2e26396eb6a628259044d3</infohash>
<guid>https://academictorrents.com/details/4108bc464fce4a222c2e26396eb6a628259044d3</guid>
<link>https://academictorrents.com/details/4108bc464fce4a222c2e26396eb6a628259044d3</link>
<description/>
<size>218010</size>
</item><item>
<title>Sufficient Dimensionality Reduction (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>8663822d38c2e62dfc0bc063e839688ba68088ed</infohash>
<guid>https://academictorrents.com/details/8663822d38c2e62dfc0bc063e839688ba68088ed</guid>
<link>https://academictorrents.com/details/8663822d38c2e62dfc0bc063e839688ba68088ed</link>
<description/>
<size>195344</size>
</item><item>
<title>FINkNN: A Fuzzy Interval Number k-Nearest Neighbor Classifier for Prediction of Sugar Production from Populations of Samples</title>
<category>Paper</category>
<infohash>502de11e8c7df6ccbf8b37f8d1a1b516a2cd577d</infohash>
<guid>https://academictorrents.com/details/502de11e8c7df6ccbf8b37f8d1a1b516a2cd577d</guid>
<link>https://academictorrents.com/details/502de11e8c7df6ccbf8b37f8d1a1b516a2cd577d</link>
<description/>
<size>360764</size>
</item><item>
<title>Dimensionality Estimation, Manifold Learning and Function Approximation using Tensor Voting</title>
<category>Paper</category>
<infohash>732e7896b2fdd062e94a459d6ba4f62ced4c7f29</infohash>
<guid>https://academictorrents.com/details/732e7896b2fdd062e94a459d6ba4f62ced4c7f29</guid>
<link>https://academictorrents.com/details/732e7896b2fdd062e94a459d6ba4f62ced4c7f29</link>
<description/>
<size>662375</size>
</item><item>
<title>PAC-Bayesian Analysis of Co-clustering and Beyond</title>
<category>Paper</category>
<infohash>449b5b080eb4f043034dc16497d31ee95b97acf7</infohash>
<guid>https://academictorrents.com/details/449b5b080eb4f043034dc16497d31ee95b97acf7</guid>
<link>https://academictorrents.com/details/449b5b080eb4f043034dc16497d31ee95b97acf7</link>
<description/>
<size>614956</size>
</item><item>
<title>Confidence-Weighted Linear Classification for Text Categorization</title>
<category>Paper</category>
<infohash>0d19898b3e3b4293787d1e7b210062161986be93</infohash>
<guid>https://academictorrents.com/details/0d19898b3e3b4293787d1e7b210062161986be93</guid>
<link>https://academictorrents.com/details/0d19898b3e3b4293787d1e7b210062161986be93</link>
<description/>
<size>2042450</size>
</item><item>
<title>Eliminating Spammers and Ranking Annotators for Crowdsourced Labeling Tasks</title>
<category>Paper</category>
<infohash>a9c761b751ed6ffbc5058ce97d767441844aaa08</infohash>
<guid>https://academictorrents.com/details/a9c761b751ed6ffbc5058ce97d767441844aaa08</guid>
<link>https://academictorrents.com/details/a9c761b751ed6ffbc5058ce97d767441844aaa08</link>
<description/>
<size>449191</size>
</item><item>
<title>Near-Optimal Sensor Placements in Gaussian Processes: Theory, Efficient Algorithms and Empirical Studies</title>
<category>Paper</category>
<infohash>87f561d3d72af9b20f9d0298efddb88f948591db</infohash>
<guid>https://academictorrents.com/details/87f561d3d72af9b20f9d0298efddb88f948591db</guid>
<link>https://academictorrents.com/details/87f561d3d72af9b20f9d0298efddb88f948591db</link>
<description/>
<size>43186</size>
</item><item>
<title>Generalized Expectation Criteria for Semi-Supervised Learning with Weakly Labeled Data</title>
<category>Paper</category>
<infohash>bc809547166bcf9d0fb0bcb7b277ddaccf7c4f69</infohash>
<guid>https://academictorrents.com/details/bc809547166bcf9d0fb0bcb7b277ddaccf7c4f69</guid>
<link>https://academictorrents.com/details/bc809547166bcf9d0fb0bcb7b277ddaccf7c4f69</link>
<description/>
<size>682234</size>
</item><item>
<title>Preference Elicitation via Theory Refinement</title>
<category>Paper</category>
<infohash>d0c45984badd217ca59844462573dc5edec9574b</infohash>
<guid>https://academictorrents.com/details/d0c45984badd217ca59844462573dc5edec9574b</guid>
<link>https://academictorrents.com/details/d0c45984badd217ca59844462573dc5edec9574b</link>
<description/>
<size>164756</size>
</item><item>
<title>The Representational Power of Discrete Bayesian Networks</title>
<category>Paper</category>
<infohash>937e6389c6ace73d55931e39e9ae1e3d1ac6e72c</infohash>
<guid>https://academictorrents.com/details/937e6389c6ace73d55931e39e9ae1e3d1ac6e72c</guid>
<link>https://academictorrents.com/details/937e6389c6ace73d55931e39e9ae1e3d1ac6e72c</link>
<description/>
<size>457485</size>
</item><item>
<title>Transfer in Reinforcement Learning via Shared Features</title>
<category>Paper</category>
<infohash>ad7c56096190d25cd8e362b9a89e4b289b46c3b8</infohash>
<guid>https://academictorrents.com/details/ad7c56096190d25cd8e362b9a89e4b289b46c3b8</guid>
<link>https://academictorrents.com/details/ad7c56096190d25cd8e362b9a89e4b289b46c3b8</link>
<description/>
<size>434223</size>
</item><item>
<title>PAC-Bayes Bounds with Data Dependent Priors</title>
<category>Paper</category>
<infohash>13efa078bace9fe68035ee3da1f60e4b3270e1e3</infohash>
<guid>https://academictorrents.com/details/13efa078bace9fe68035ee3da1f60e4b3270e1e3</guid>
<link>https://academictorrents.com/details/13efa078bace9fe68035ee3da1f60e4b3270e1e3</link>
<description/>
<size>325019</size>
</item><item>
<title>Limitations of Learning Via Embeddings in Euclidean Half Spaces</title>
<category>Paper</category>
<infohash>fd6c5f091409009d84c12d68d4dcc1f272a2ac95</infohash>
<guid>https://academictorrents.com/details/fd6c5f091409009d84c12d68d4dcc1f272a2ac95</guid>
<link>https://academictorrents.com/details/fd6c5f091409009d84c12d68d4dcc1f272a2ac95</link>
<description/>
<size>283793</size>
</item><item>
<title>Smoothing Multivariate Performance Measures</title>
<category>Paper</category>
<infohash>821f225940a7aed9f6d38367cd54684bbd0f25a6</infohash>
<guid>https://academictorrents.com/details/821f225940a7aed9f6d38367cd54684bbd0f25a6</guid>
<link>https://academictorrents.com/details/821f225940a7aed9f6d38367cd54684bbd0f25a6</link>
<description/>
<size>858607</size>
</item><item>
<title>Concentration Inequalities for the Missing Mass and for Histogram Rule Error</title>
<category>Paper</category>
<infohash>584127b9189720f45039750333be5e5625a9c1ce</infohash>
<guid>https://academictorrents.com/details/584127b9189720f45039750333be5e5625a9c1ce</guid>
<link>https://academictorrents.com/details/584127b9189720f45039750333be5e5625a9c1ce</link>
<description/>
<size>323711</size>
</item><item>
<title>On the Importance of Small Coordinate Projections</title>
<category>Paper</category>
<infohash>75c3a5eaf6b12fd9a7b08a4e10d6ec784af30ae6</infohash>
<guid>https://academictorrents.com/details/75c3a5eaf6b12fd9a7b08a4e10d6ec784af30ae6</guid>
<link>https://academictorrents.com/details/75c3a5eaf6b12fd9a7b08a4e10d6ec784af30ae6</link>
<description/>
<size>215553</size>
</item><item>
<title>A Multiscale Framework For Blind Separation of Linearly Mixed Signals</title>
<category>Paper</category>
<infohash>d63cbc0620adcf668aebd2f0274e9e8c2bc0a5d9</infohash>
<guid>https://academictorrents.com/details/d63cbc0620adcf668aebd2f0274e9e8c2bc0a5d9</guid>
<link>https://academictorrents.com/details/d63cbc0620adcf668aebd2f0274e9e8c2bc0a5d9</link>
<description/>
<size>3357313</size>
</item><item>
<title>On the Rate of Convergence of the Bagged Nearest Neighbor Estimate</title>
<category>Paper</category>
<infohash>8e2157d9ced05bfd778af368695c050027f37038</infohash>
<guid>https://academictorrents.com/details/8e2157d9ced05bfd778af368695c050027f37038</guid>
<link>https://academictorrents.com/details/8e2157d9ced05bfd778af368695c050027f37038</link>
<description/>
<size>193656</size>
</item><item>
<title>The Principled Design of Large-Scale Recursive Neural Network Architectures--DAG-RNNs and the Protein Structure Prediction Problem</title>
<category>Paper</category>
<infohash>03262bd503378b1d0c274a01d4022ad0ecb2de1d</infohash>
<guid>https://academictorrents.com/details/03262bd503378b1d0c274a01d4022ad0ecb2de1d</guid>
<link>https://academictorrents.com/details/03262bd503378b1d0c274a01d4022ad0ecb2de1d</link>
<description/>
<size>926725</size>
</item><item>
<title>Reinforcement Learning with Factored States and Actions</title>
<category>Paper</category>
<infohash>6776a581124c86f12e3aca0c4629f9e71adc45d6</infohash>
<guid>https://academictorrents.com/details/6776a581124c86f12e3aca0c4629f9e71adc45d6</guid>
<link>https://academictorrents.com/details/6776a581124c86f12e3aca0c4629f9e71adc45d6</link>
<description/>
<size>210671</size>
</item><item>
<title>Regret Bounds and Minimax Policies under Partial Monitoring</title>
<category>Paper</category>
<infohash>2113a0005716fde62b1201c4a2c846a615b30b66</infohash>
<guid>https://academictorrents.com/details/2113a0005716fde62b1201c4a2c846a615b30b66</guid>
<link>https://academictorrents.com/details/2113a0005716fde62b1201c4a2c846a615b30b66</link>
<description/>
<size>386699</size>
</item><item>
<title>Generalization from Observed to Unobserved Features by Clustering</title>
<category>Paper</category>
<infohash>e4d102e59cf6e3c68d0493b66b3c1ad809b11da3</infohash>
<guid>https://academictorrents.com/details/e4d102e59cf6e3c68d0493b66b3c1ad809b11da3</guid>
<link>https://academictorrents.com/details/e4d102e59cf6e3c68d0493b66b3c1ad809b11da3</link>
<description/>
<size>133378</size>
</item><item>
<title>A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning</title>
<category>Paper</category>
<infohash>81d30aec29d668644d9c0b4e64bc41bdefa2d929</infohash>
<guid>https://academictorrents.com/details/81d30aec29d668644d9c0b4e64bc41bdefa2d929</guid>
<link>https://academictorrents.com/details/81d30aec29d668644d9c0b4e64bc41bdefa2d929</link>
<description/>
<size>1319467</size>
</item><item>
<title>Algorithms for Learning Kernels Based on Centered Alignment</title>
<category>Paper</category>
<infohash>174f90e3b63899aa00e90012bd3e44ee5b79efb6</infohash>
<guid>https://academictorrents.com/details/174f90e3b63899aa00e90012bd3e44ee5b79efb6</guid>
<link>https://academictorrents.com/details/174f90e3b63899aa00e90012bd3e44ee5b79efb6</link>
<description/>
<size>266595</size>
</item><item>
<title>Stability Bounds for Stationary φ-mixing and β-mixing Processes</title>
<category>Paper</category>
<infohash>dae7382244d21381a9a098efe009838ec491862b</infohash>
<guid>https://academictorrents.com/details/dae7382244d21381a9a098efe009838ec491862b</guid>
<link>https://academictorrents.com/details/dae7382244d21381a9a098efe009838ec491862b</link>
<description/>
<size>207792</size>
</item><item>
<title>Sparse Spectrum Gaussian Process Regression</title>
<category>Paper</category>
<infohash>510a89459197e881352e8a99ccea659083c3f96d</infohash>
<guid>https://academictorrents.com/details/510a89459197e881352e8a99ccea659083c3f96d</guid>
<link>https://academictorrents.com/details/510a89459197e881352e8a99ccea659083c3f96d</link>
<description/>
<size>178388</size>
</item><item>
<title>Blind Separation of Post-nonlinear Mixtures using Linearizing Transformations and Temporal Decorrelation</title>
<category>Paper</category>
<infohash>2a0497807adea125d3c68cdc6a9c5e96473e8596</infohash>
<guid>https://academictorrents.com/details/2a0497807adea125d3c68cdc6a9c5e96473e8596</guid>
<link>https://academictorrents.com/details/2a0497807adea125d3c68cdc6a9c5e96473e8596</link>
<description/>
<size>918894</size>
</item><item>
<title>Least-Squares Policy Iteration</title>
<category>Paper</category>
<infohash>2f194659087417e235adbaf606c9e170b7b6002c</infohash>
<guid>https://academictorrents.com/details/2f194659087417e235adbaf606c9e170b7b6002c</guid>
<link>https://academictorrents.com/details/2f194659087417e235adbaf606c9e170b7b6002c</link>
<description/>
<size>312970</size>
</item><item>
<title>Random Search for Hyper-Parameter Optimization</title>
<category>Paper</category>
<infohash>57622727b8c7413fba521f635ade9ef36223023c</infohash>
<guid>https://academictorrents.com/details/57622727b8c7413fba521f635ade9ef36223023c</guid>
<link>https://academictorrents.com/details/57622727b8c7413fba521f635ade9ef36223023c</link>
<description/>
<size>728274</size>
</item><item>
<title>A Family of Additive Online Algorithms for Category Ranking</title>
<category>Paper</category>
<infohash>f70052bb09cf26851208fc7b73d4dc8a5b710861</infohash>
<guid>https://academictorrents.com/details/f70052bb09cf26851208fc7b73d4dc8a5b710861</guid>
<link>https://academictorrents.com/details/f70052bb09cf26851208fc7b73d4dc8a5b710861</link>
<description/>
<size>417996</size>
</item><item>
<title>Path Kernels and Multiplicative Updates</title>
<category>Paper</category>
<infohash>fe0610492c0ceb4f4013b0c95c7f349cec766f68</infohash>
<guid>https://academictorrents.com/details/fe0610492c0ceb4f4013b0c95c7f349cec766f68</guid>
<link>https://academictorrents.com/details/fe0610492c0ceb4f4013b0c95c7f349cec766f68</link>
<description/>
<size>131870</size>
</item><item>
<title>Max-margin Classification of Data with Absent Features</title>
<category>Paper</category>
<infohash>10f362e8e840da581437cea0f6cbcbbc430d2887</infohash>
<guid>https://academictorrents.com/details/10f362e8e840da581437cea0f6cbcbbc430d2887</guid>
<link>https://academictorrents.com/details/10f362e8e840da581437cea0f6cbcbbc430d2887</link>
<description/>
<size>292242</size>
</item><item>
<title>Fast and Scalable Local Kernel Machines</title>
<category>Paper</category>
<infohash>058438dc9c8f680864afbb8953e005c92f8da308</infohash>
<guid>https://academictorrents.com/details/058438dc9c8f680864afbb8953e005c92f8da308</guid>
<link>https://academictorrents.com/details/058438dc9c8f680864afbb8953e005c92f8da308</link>
<description/>
<size>425905</size>
</item><item>
<title>Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers</title>
<category>Paper</category>
<infohash>9886fcb86453efd5dd63496426ac3ce46ac5a5dd</infohash>
<guid>https://academictorrents.com/details/9886fcb86453efd5dd63496426ac3ce46ac5a5dd</guid>
<link>https://academictorrents.com/details/9886fcb86453efd5dd63496426ac3ce46ac5a5dd</link>
<description/>
<size>277919</size>
</item><item>
<title>Exact Bayesian Structure Discovery in Bayesian Networks</title>
<category>Paper</category>
<infohash>24381f608b09bb20958184f73c895baf16522f47</infohash>
<guid>https://academictorrents.com/details/24381f608b09bb20958184f73c895baf16522f47</guid>
<link>https://academictorrents.com/details/24381f608b09bb20958184f73c895baf16522f47</link>
<description/>
<size>173898</size>
</item><item>
<title>Iterative Reweighted Algorithms for Matrix Rank Minimization</title>
<category>Paper</category>
<infohash>24dbb7a0e4123b4b6782d4c2c8bcce583de40ff0</infohash>
<guid>https://academictorrents.com/details/24dbb7a0e4123b4b6782d4c2c8bcce583de40ff0</guid>
<link>https://academictorrents.com/details/24dbb7a0e4123b4b6782d4c2c8bcce583de40ff0</link>
<description/>
<size>335721</size>
</item><item>
<title>Classification Using Geometric Level Sets</title>
<category>Paper</category>
<infohash>5237e8c321f52ece1a04f5e377b4c07d5ce85489</infohash>
<guid>https://academictorrents.com/details/5237e8c321f52ece1a04f5e377b4c07d5ce85489</guid>
<link>https://academictorrents.com/details/5237e8c321f52ece1a04f5e377b4c07d5ce85489</link>
<description/>
<size>1036107</size>
</item><item>
<title>Mixability is Bayes Risk Curvature Relative to Log Loss</title>
<category>Paper</category>
<infohash>aa86dcd4f400d55d2bbf9b3fe3c2ca5d3c177703</infohash>
<guid>https://academictorrents.com/details/aa86dcd4f400d55d2bbf9b3fe3c2ca5d3c177703</guid>
<link>https://academictorrents.com/details/aa86dcd4f400d55d2bbf9b3fe3c2ca5d3c177703</link>
<description/>
<size>429797</size>
</item><item>
<title>Consistent Nonparametric Tests of Independence</title>
<category>Paper</category>
<infohash>124c06f5f90001f63019dd5b2047d275a34bfc95</infohash>
<guid>https://academictorrents.com/details/124c06f5f90001f63019dd5b2047d275a34bfc95</guid>
<link>https://academictorrents.com/details/124c06f5f90001f63019dd5b2047d275a34bfc95</link>
<description/>
<size>369959</size>
</item><item>
<title>Probability Estimates for Multi-class Classification by Pairwise Coupling</title>
<category>Paper</category>
<infohash>07087e767df537c2df206dddb799d20abd627c22</infohash>
<guid>https://academictorrents.com/details/07087e767df537c2df206dddb799d20abd627c22</guid>
<link>https://academictorrents.com/details/07087e767df537c2df206dddb799d20abd627c22</link>
<description/>
<size>501956</size>
</item><item>
<title>Pattern for Python</title>
<category>Paper</category>
<infohash>fde4c9025c1cdba3606c63f5a70bc313c71d46a0</infohash>
<guid>https://academictorrents.com/details/fde4c9025c1cdba3606c63f5a70bc313c71d46a0</guid>
<link>https://academictorrents.com/details/fde4c9025c1cdba3606c63f5a70bc313c71d46a0</link>
<description/>
<size>94192</size>
</item><item>
<title>Linear Fitted-Q Iteration with Multiple Reward Functions</title>
<category>Paper</category>
<infohash>ab524298f818565f7b4d711895cc9f68cd2d442b</infohash>
<guid>https://academictorrents.com/details/ab524298f818565f7b4d711895cc9f68cd2d442b</guid>
<link>https://academictorrents.com/details/ab524298f818565f7b4d711895cc9f68cd2d442b</link>
<description/>
<size>1296091</size>
</item><item>
<title>Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso</title>
<category>Paper</category>
<infohash>83e025f04f12b8ede210a6e6de649563257bcd85</infohash>
<guid>https://academictorrents.com/details/83e025f04f12b8ede210a6e6de649563257bcd85</guid>
<link>https://academictorrents.com/details/83e025f04f12b8ede210a6e6de649563257bcd85</link>
<description/>
<size>911768</size>
</item><item>
<title>On Nearest-Neighbor Error-Correcting Output Codes with Application to All-Pairs Multiclass Support Vector Machines</title>
<category>Paper</category>
<infohash>0d0217867c557ff185fc893b6cf9b460c3580724</infohash>
<guid>https://academictorrents.com/details/0d0217867c557ff185fc893b6cf9b460c3580724</guid>
<link>https://academictorrents.com/details/0d0217867c557ff185fc893b6cf9b460c3580724</link>
<description/>
<size>161921</size>
</item><item>
<title>Topology Selection in Graphical Models of Autoregressive Processes</title>
<category>Paper</category>
<infohash>74af429ea8a9a70fc6e8628c967415b6b92b5084</infohash>
<guid>https://academictorrents.com/details/74af429ea8a9a70fc6e8628c967415b6b92b5084</guid>
<link>https://academictorrents.com/details/74af429ea8a9a70fc6e8628c967415b6b92b5084</link>
<description/>
<size>324793</size>
</item><item>
<title>Preference Elicitation and Query Learning (Special Topic on Learning Theory)</title>
<category>Paper</category>
<infohash>1101a44824c650a0db007a929a626d81cdfcf5c6</infohash>
<guid>https://academictorrents.com/details/1101a44824c650a0db007a929a626d81cdfcf5c6</guid>
<link>https://academictorrents.com/details/1101a44824c650a0db007a929a626d81cdfcf5c6</link>
<description/>
<size>208635</size>
</item><item>
<title>Regularized Bundle Methods for Convex and Non-Convex Risks</title>
<category>Paper</category>
<infohash>bbca1f9f0daac4327e0c4f6ad9e889b2b11fa6a9</infohash>
<guid>https://academictorrents.com/details/bbca1f9f0daac4327e0c4f6ad9e889b2b11fa6a9</guid>
<link>https://academictorrents.com/details/bbca1f9f0daac4327e0c4f6ad9e889b2b11fa6a9</link>
<description/>
<size>576405</size>
</item><item>
<title>Bouligand Derivatives and Robustness of Support Vector Machines for Regression</title>
<category>Paper</category>
<infohash>6716583910ceedf4985d3f1fd4e5fb6b8f53a242</infohash>
<guid>https://academictorrents.com/details/6716583910ceedf4985d3f1fd4e5fb6b8f53a242</guid>
<link>https://academictorrents.com/details/6716583910ceedf4985d3f1fd4e5fb6b8f53a242</link>
<description/>
<size>531564</size>
</item><item>
<title>GPLP: A Local and Parallel Computation Toolbox for Gaussian Process Regression</title>
<category>Paper</category>
<infohash>41860ce928ae47eb50b7dcd8170871277d2e1efa</infohash>
<guid>https://academictorrents.com/details/41860ce928ae47eb50b7dcd8170871277d2e1efa</guid>
<link>https://academictorrents.com/details/41860ce928ae47eb50b7dcd8170871277d2e1efa</link>
<description/>
<size>43282</size>
</item><item>
<title>Local Causal and Markov Blanket Induction for Causal Discovery and Feature Selection for Classification Part II: Analysis and Extensions</title>
<category>Paper</category>
<infohash>644188fbca51acc28a99ef5c6b5fc18e147e30e7</infohash>
<guid>https://academictorrents.com/details/644188fbca51acc28a99ef5c6b5fc18e147e30e7</guid>
<link>https://academictorrents.com/details/644188fbca51acc28a99ef5c6b5fc18e147e30e7</link>
<description/>
<size>1937936</size>
</item><item>
<title>Bayes Point Machines (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>d1d353fe6c9e316ef89dc78a499bba2d2b7bb4ec</infohash>
<guid>https://academictorrents.com/details/d1d353fe6c9e316ef89dc78a499bba2d2b7bb4ec</guid>
<link>https://academictorrents.com/details/d1d353fe6c9e316ef89dc78a499bba2d2b7bb4ec</link>
<description/>
<size>977426</size>
</item><item>
<title>Hubs in Space: Popular Nearest Neighbors in High-Dimensional Data</title>
<category>Paper</category>
<infohash>f0c816214fba191eb864acb4bc1139920972b00b</infohash>
<guid>https://academictorrents.com/details/f0c816214fba191eb864acb4bc1139920972b00b</guid>
<link>https://academictorrents.com/details/f0c816214fba191eb864acb4bc1139920972b00b</link>
<description/>
<size>1088886</size>
</item><item>
<title>Graph Kernels</title>
<category>Paper</category>
<infohash>902c20ad640a17526110f45a87456118562f8fbb</infohash>
<guid>https://academictorrents.com/details/902c20ad640a17526110f45a87456118562f8fbb</guid>
<link>https://academictorrents.com/details/902c20ad640a17526110f45a87456118562f8fbb</link>
<description/>
<size>1564895</size>
</item><item>
<title>Large-scale Linear Support Vector Regression</title>
<category>Paper</category>
<infohash>b8af5361e7ddba127c6c19c56e49b8b4257b5c37</infohash>
<guid>https://academictorrents.com/details/b8af5361e7ddba127c6c19c56e49b8b4257b5c37</guid>
<link>https://academictorrents.com/details/b8af5361e7ddba127c6c19c56e49b8b4257b5c37</link>
<description/>
<size>1080182</size>
</item><item>
<title>A Geometric Approach to Sample Compression</title>
<category>Paper</category>
<infohash>83a478eded8871fdc9b9f5d6f04c3e62345d6250</infohash>
<guid>https://academictorrents.com/details/83a478eded8871fdc9b9f5d6f04c3e62345d6250</guid>
<link>https://academictorrents.com/details/83a478eded8871fdc9b9f5d6f04c3e62345d6250</link>
<description/>
<size>501023</size>
</item><item>
<title>A Topic Modeling Toolbox Using Belief Propagation</title>
<category>Paper</category>
<infohash>9e9911232d0f2f7d5a4b87b1faa2411107786266</infohash>
<guid>https://academictorrents.com/details/9e9911232d0f2f7d5a4b87b1faa2411107786266</guid>
<link>https://academictorrents.com/details/9e9911232d0f2f7d5a4b87b1faa2411107786266</link>
<description/>
<size>61471</size>
</item><item>
<title>A Generalized Path Integral Control Approach to Reinforcement Learning</title>
<category>Paper</category>
<infohash>0142723bea471e43f141538dacf4ed809e1756c6</infohash>
<guid>https://academictorrents.com/details/0142723bea471e43f141538dacf4ed809e1756c6</guid>
<link>https://academictorrents.com/details/0142723bea471e43f141538dacf4ed809e1756c6</link>
<description/>
<size>1090353</size>
</item><item>
<title>Word-Sequence Kernels</title>
<category>Paper</category>
<infohash>dfd1871e922ff4c2811882e3d719ea425cd189aa</infohash>
<guid>https://academictorrents.com/details/dfd1871e922ff4c2811882e3d719ea425cd189aa</guid>
<link>https://academictorrents.com/details/dfd1871e922ff4c2811882e3d719ea425cd189aa</link>
<description/>
<size>13792</size>
</item><item>
<title>Characterization, Stability and Convergence of Hierarchical Clustering Methods</title>
<category>Paper</category>
<infohash>043cafaa559a5088f491fd6d29acadef767d252f</infohash>
<guid>https://academictorrents.com/details/043cafaa559a5088f491fd6d29acadef767d252f</guid>
<link>https://academictorrents.com/details/043cafaa559a5088f491fd6d29acadef767d252f</link>
<description/>
<size>857225</size>
</item><item>
<title>Image Denoising with Kernels Based on Natural Image Relations</title>
<category>Paper</category>
<infohash>722798b9b0b8e12eec206ac3bf2844bceb6ab60f</infohash>
<guid>https://academictorrents.com/details/722798b9b0b8e12eec206ac3bf2844bceb6ab60f</guid>
<link>https://academictorrents.com/details/722798b9b0b8e12eec206ac3bf2844bceb6ab60f</link>
<description/>
<size>1767344</size>
</item><item>
<title>Expectation Truncation and the Benefits of Preselection In Training Generative Models</title>
<category>Paper</category>
<infohash>a7b797f1c372cb39ec23fd49fca4a82bf0e204ad</infohash>
<guid>https://academictorrents.com/details/a7b797f1c372cb39ec23fd49fca4a82bf0e204ad</guid>
<link>https://academictorrents.com/details/a7b797f1c372cb39ec23fd49fca4a82bf0e204ad</link>
<description/>
<size>1438325</size>
</item><item>
<title>New Techniques for Disambiguation in Natural Language and Their Application to Biological Text</title>
<category>Paper</category>
<infohash>03476ad6223764d02fd63fc63e999d3c021b2840</infohash>
<guid>https://academictorrents.com/details/03476ad6223764d02fd63fc63e999d3c021b2840</guid>
<link>https://academictorrents.com/details/03476ad6223764d02fd63fc63e999d3c021b2840</link>
<description/>
<size>274564</size>
</item><item>
<title>Prior Knowledge and Preferential Structures in Gradient Descent Learning Algorithms</title>
<category>Paper</category>
<infohash>00d5583c3d2cef10043727f3fe893ee0d7da8591</infohash>
<guid>https://academictorrents.com/details/00d5583c3d2cef10043727f3fe893ee0d7da8591</guid>
<link>https://academictorrents.com/details/00d5583c3d2cef10043727f3fe893ee0d7da8591</link>
<description/>
<size>486620</size>
</item><item>
<title>Using Contextual Representations to Efficiently Learn Context-Free Languages</title>
<category>Paper</category>
<infohash>3855a22689cf7ff8aa35e59fae1a7dd36fc6943f</infohash>
<guid>https://academictorrents.com/details/3855a22689cf7ff8aa35e59fae1a7dd36fc6943f</guid>
<link>https://academictorrents.com/details/3855a22689cf7ff8aa35e59fae1a7dd36fc6943f</link>
<description/>
<size>356131</size>
</item><item>
<title>Sparse Semi-supervised Learning Using Conjugate Functions</title>
<category>Paper</category>
<infohash>cddb9ae68147ba451d549738f0bfef2786237486</infohash>
<guid>https://academictorrents.com/details/cddb9ae68147ba451d549738f0bfef2786237486</guid>
<link>https://academictorrents.com/details/cddb9ae68147ba451d549738f0bfef2786237486</link>
<description/>
<size>331420</size>
</item><item>
<title>An Extension on "Statistical Comparisons of Classifiers over Multiple Data Sets" for all Pairwise Comparisons</title>
<category>Paper</category>
<infohash>94ce5e5953a7438ff6e792ff93f22aef1ce1c02f</infohash>
<guid>https://academictorrents.com/details/94ce5e5953a7438ff6e792ff93f22aef1ce1c02f</guid>
<link>https://academictorrents.com/details/94ce5e5953a7438ff6e792ff93f22aef1ce1c02f</link>
<description/>
<size>343897</size>
</item><item>
<title>Breaking the Curse of Kernelization: Budgeted Stochastic Gradient Descent for Large-Scale SVM Training</title>
<category>Paper</category>
<infohash>dd455fdfb4bc66e32547d59dba4e95ef01bac8aa</infohash>
<guid>https://academictorrents.com/details/dd455fdfb4bc66e32547d59dba4e95ef01bac8aa</guid>
<link>https://academictorrents.com/details/dd455fdfb4bc66e32547d59dba4e95ef01bac8aa</link>
<description/>
<size>304579</size>
</item><item>
<title>Task Clustering and Gating for Bayesian Multitask Learning</title>
<category>Paper</category>
<infohash>6120c5133b7aa90e32951c74015d5edbc450788e</infohash>
<guid>https://academictorrents.com/details/6120c5133b7aa90e32951c74015d5edbc450788e</guid>
<link>https://academictorrents.com/details/6120c5133b7aa90e32951c74015d5edbc450788e</link>
<description/>
<size>285467</size>
</item><item>
<title>A Fast Algorithm for Joint Diagonalization with Non-orthogonal Transformations and its Application to Blind Source Separation</title>
<category>Paper</category>
<infohash>1a66f047252e0beb2526052b1ec34ec4a291dc03</infohash>
<guid>https://academictorrents.com/details/1a66f047252e0beb2526052b1ec34ec4a291dc03</guid>
<link>https://academictorrents.com/details/1a66f047252e0beb2526052b1ec34ec4a291dc03</link>
<description/>
<size>2338783</size>
</item><item>
<title>An Efficient Explanation of Individual Classifications using Game Theory</title>
<category>Paper</category>
<infohash>3397ba9d22c9d32db2a34cb7f54cd42af8d81594</infohash>
<guid>https://academictorrents.com/details/3397ba9d22c9d32db2a34cb7f54cd42af8d81594</guid>
<link>https://academictorrents.com/details/3397ba9d22c9d32db2a34cb7f54cd42af8d81594</link>
<description/>
<size>690800</size>
</item><item>
<title>A Recursive Method for Structural Learning of Directed Acyclic Graphs</title>
<category>Paper</category>
<infohash>1639f8f3024d3007cb73b229cfea66fa561e3f2e</infohash>
<guid>https://academictorrents.com/details/1639f8f3024d3007cb73b229cfea66fa561e3f2e</guid>
<link>https://academictorrents.com/details/1639f8f3024d3007cb73b229cfea66fa561e3f2e</link>
<description/>
<size>986243</size>
</item><item>
<title>Refinement of Operator-valued Reproducing Kernels</title>
<category>Paper</category>
<infohash>b509b5ce4211a0131fa8473ce008b9820b40763a</infohash>
<guid>https://academictorrents.com/details/b509b5ce4211a0131fa8473ce008b9820b40763a</guid>
<link>https://academictorrents.com/details/b509b5ce4211a0131fa8473ce008b9820b40763a</link>
<description/>
<size>458860</size>
</item><item>
<title>Evidence Contrary to the Statistical View of Boosting</title>
<category>Paper</category>
<infohash>bd0aa8d48ad8b8e504f2a6f961d747743a597919</infohash>
<guid>https://academictorrents.com/details/bd0aa8d48ad8b8e504f2a6f961d747743a597919</guid>
<link>https://academictorrents.com/details/bd0aa8d48ad8b8e504f2a6f961d747743a597919</link>
<description/>
<size>973130</size>
</item><item>
<title>Approximate Tree Kernels</title>
<category>Paper</category>
<infohash>33c558a7bdaca3d68c9009a4789c23c370f54d4b</infohash>
<guid>https://academictorrents.com/details/33c558a7bdaca3d68c9009a4789c23c370f54d4b</guid>
<link>https://academictorrents.com/details/33c558a7bdaca3d68c9009a4789c23c370f54d4b</link>
<description/>
<size>373352</size>
</item><item>
<title>Estimation of a Structural Vector Autoregression Model Using Non-Gaussianity</title>
<category>Paper</category>
<infohash>ce87912ab6bf6bf6898f95309f0a21564a85091e</infohash>
<guid>https://academictorrents.com/details/ce87912ab6bf6bf6898f95309f0a21564a85091e</guid>
<link>https://academictorrents.com/details/ce87912ab6bf6bf6898f95309f0a21564a85091e</link>
<description/>
<size>502646</size>
</item><item>
<title>Tree Induction vs. Logistic Regression: A Learning-Curve Analysis</title>
<category>Paper</category>
<infohash>56e25fd7bb45197ace4a3cb3f6493878d1076924</infohash>
<guid>https://academictorrents.com/details/56e25fd7bb45197ace4a3cb3f6493878d1076924</guid>
<link>https://academictorrents.com/details/56e25fd7bb45197ace4a3cb3f6493878d1076924</link>
<description/>
<size>309924</size>
</item><item>
<title>Bayesian Mixed-Effects Inference on Classification Performance in Hierarchical Data Sets</title>
<category>Paper</category>
<infohash>2e4f939686b1ee3662c375137ee2500d51308deb</infohash>
<guid>https://academictorrents.com/details/2e4f939686b1ee3662c375137ee2500d51308deb</guid>
<link>https://academictorrents.com/details/2e4f939686b1ee3662c375137ee2500d51308deb</link>
<description/>
<size>1695411</size>
</item><item>
<title>Online Learning in the Embedded Manifold of Low-rank Matrices</title>
<category>Paper</category>
<infohash>6e74f34f9f7d264c65c7c5abab71bde6dfe3596a</infohash>
<guid>https://academictorrents.com/details/6e74f34f9f7d264c65c7c5abab71bde6dfe3596a</guid>
<link>https://academictorrents.com/details/6e74f34f9f7d264c65c7c5abab71bde6dfe3596a</link>
<description/>
<size>337098</size>
</item><item>
<title>Active Learning via Perfect Selective Classification</title>
<category>Paper</category>
<infohash>4df9c8a69217a388ad08e131f9f3135d1bfa0678</infohash>
<guid>https://academictorrents.com/details/4df9c8a69217a388ad08e131f9f3135d1bfa0678</guid>
<link>https://academictorrents.com/details/4df9c8a69217a388ad08e131f9f3135d1bfa0678</link>
<description/>
<size>194656</size>
</item><item>
<title>On Online Learning of Decision Lists</title>
<category>Paper</category>
<infohash>a6b41b30aaeebe94acc7ed4f4fa59f11f8fc8653</infohash>
<guid>https://academictorrents.com/details/a6b41b30aaeebe94acc7ed4f4fa59f11f8fc8653</guid>
<link>https://academictorrents.com/details/a6b41b30aaeebe94acc7ed4f4fa59f11f8fc8653</link>
<description/>
<size>523749</size>
</item><item>
<title>Variable Selection in High-dimensional Varying-coefficient Models with Global Optimality</title>
<category>Paper</category>
<infohash>76cf546bf74aa2b8fac63cd4d8fa28b32a2db40f</infohash>
<guid>https://academictorrents.com/details/76cf546bf74aa2b8fac63cd4d8fa28b32a2db40f</guid>
<link>https://academictorrents.com/details/76cf546bf74aa2b8fac63cd4d8fa28b32a2db40f</link>
<description/>
<size>256723</size>
</item><item>
<title>Distributional Scaling: An Algorithm for Structure-Preserving Embedding of Metric and Nonmetric Spaces</title>
<category>Paper</category>
<infohash>412dfcfddc78602cb2c06cf7eed5024caacceb39</infohash>
<guid>https://academictorrents.com/details/412dfcfddc78602cb2c06cf7eed5024caacceb39</guid>
<link>https://academictorrents.com/details/412dfcfddc78602cb2c06cf7eed5024caacceb39</link>
<description/>
<size>563080</size>
</item><item>
<title>Bundle Methods for Regularized Risk Minimization</title>
<category>Paper</category>
<infohash>50af39c04117a844d8f46ac37602d26955d8702a</infohash>
<guid>https://academictorrents.com/details/50af39c04117a844d8f46ac37602d26955d8702a</guid>
<link>https://academictorrents.com/details/50af39c04117a844d8f46ac37602d26955d8702a</link>
<description/>
<size>1836816</size>
</item><item>
<title>High-dimensional Variable Selection with Sparse Random Projections: Measurement Sparsity and Statistical Efficiency</title>
<category>Paper</category>
<infohash>274969581d5f4d221b266d6c6b67acbfc491762c</infohash>
<guid>https://academictorrents.com/details/274969581d5f4d221b266d6c6b67acbfc491762c</guid>
<link>https://academictorrents.com/details/274969581d5f4d221b266d6c6b67acbfc491762c</link>
<description/>
<size>205180</size>
</item><item>
<title>A Comparison of Optimization Methods and Software for Large-scale L1-regularized Linear Classification</title>
<category>Paper</category>
<infohash>e0342574371245947e145f49125d247966c42c76</infohash>
<guid>https://academictorrents.com/details/e0342574371245947e145f49125d247966c42c76</guid>
<link>https://academictorrents.com/details/e0342574371245947e145f49125d247966c42c76</link>
<description/>
<size>6473660</size>
</item><item>
<title>Selective Rademacher Penalization and Reduced Error Pruning of Decision Trees</title>
<category>Paper</category>
<infohash>16e0cd092149a1d0d886146f917b411ee9bfe623</infohash>
<guid>https://academictorrents.com/details/16e0cd092149a1d0d886146f917b411ee9bfe623</guid>
<link>https://academictorrents.com/details/16e0cd092149a1d0d886146f917b411ee9bfe623</link>
<description/>
<size>129905</size>
</item><item>
<title>Selective Sampling and Active Learning from Single and Multiple Teachers</title>
<category>Paper</category>
<infohash>fb08d84af039c318a50f8406cd679ad90ba2d758</infohash>
<guid>https://academictorrents.com/details/fb08d84af039c318a50f8406cd679ad90ba2d758</guid>
<link>https://academictorrents.com/details/fb08d84af039c318a50f8406cd679ad90ba2d758</link>
<description/>
<size>364201</size>
</item><item>
<title>Information Theoretic Measures for Clusterings Comparison: Variants, Properties, Normalization and Correction for Chance</title>
<category>Paper</category>
<infohash>338e2e4bc75b07104e73e11440ca633eb144ce7b</infohash>
<guid>https://academictorrents.com/details/338e2e4bc75b07104e73e11440ca633eb144ce7b</guid>
<link>https://academictorrents.com/details/338e2e4bc75b07104e73e11440ca633eb144ce7b</link>
<description/>
<size>240549</size>
</item><item>
<title>Learning Bounded Treewidth Bayesian Networks</title>
<category>Paper</category>
<infohash>5ef14ebb31914e28d839ce14a842d455fcdd6a30</infohash>
<guid>https://academictorrents.com/details/5ef14ebb31914e28d839ce14a842d455fcdd6a30</guid>
<link>https://academictorrents.com/details/5ef14ebb31914e28d839ce14a842d455fcdd6a30</link>
<description/>
<size>3179328</size>
</item><item>
<title>Consistent Model Selection Criteria on High Dimensions</title>
<category>Paper</category>
<infohash>f1f26d26cf9b7646221ac32cfd431bd9acf54cde</infohash>
<guid>https://academictorrents.com/details/f1f26d26cf9b7646221ac32cfd431bd9acf54cde</guid>
<link>https://academictorrents.com/details/f1f26d26cf9b7646221ac32cfd431bd9acf54cde</link>
<description/>
<size>147586</size>
</item><item>
<title>Variance Reduction Techniques for Gradient Estimates in Reinforcement Learning</title>
<category>Paper</category>
<infohash>b0da46a4d6018f022b5e22af6cd2b546e2d7aca4</infohash>
<guid>https://academictorrents.com/details/b0da46a4d6018f022b5e22af6cd2b546e2d7aca4</guid>
<link>https://academictorrents.com/details/b0da46a4d6018f022b5e22af6cd2b546e2d7aca4</link>
<description/>
<size>324360</size>
</item><item>
<title>Sampling Methods for the Nyström Method</title>
<category>Paper</category>
<infohash>be9f7827c593f14f5d2cdb28d395d90e867a29b4</infohash>
<guid>https://academictorrents.com/details/be9f7827c593f14f5d2cdb28d395d90e867a29b4</guid>
<link>https://academictorrents.com/details/be9f7827c593f14f5d2cdb28d395d90e867a29b4</link>
<description/>
<size>210917</size>
</item><item>
<title>Computable Shell Decomposition Bounds</title>
<category>Paper</category>
<infohash>3f04847ac68a3ce462ee82f5d06e8147c88f3ea0</infohash>
<guid>https://academictorrents.com/details/3f04847ac68a3ce462ee82f5d06e8147c88f3ea0</guid>
<link>https://academictorrents.com/details/3f04847ac68a3ce462ee82f5d06e8147c88f3ea0</link>
<description/>
<size>226901</size>
</item><item>
<title>Learning Non-Stationary Dynamic Bayesian Networks</title>
<category>Paper</category>
<infohash>657a539d78ca85231a4c30b7779e7f777d3e00ac</infohash>
<guid>https://academictorrents.com/details/657a539d78ca85231a4c30b7779e7f777d3e00ac</guid>
<link>https://academictorrents.com/details/657a539d78ca85231a4c30b7779e7f777d3e00ac</link>
<description/>
<size>1875815</size>
</item><item>
<title>Posterior Regularization for Structured Latent Variable Models</title>
<category>Paper</category>
<infohash>2b305b61748592f3960e5164c7a75e679acc48a3</infohash>
<guid>https://academictorrents.com/details/2b305b61748592f3960e5164c7a75e679acc48a3</guid>
<link>https://academictorrents.com/details/2b305b61748592f3960e5164c7a75e679acc48a3</link>
<description/>
<size>1181939</size>
</item><item>
<title>Learning Algorithms for the Classification Restricted Boltzmann Machine</title>
<category>Paper</category>
<infohash>7111107e9b410dcb938243037b1c2cab411ae0d0</infohash>
<guid>https://academictorrents.com/details/7111107e9b410dcb938243037b1c2cab411ae0d0</guid>
<link>https://academictorrents.com/details/7111107e9b410dcb938243037b1c2cab411ae0d0</link>
<description/>
<size>369756</size>
</item><item>
<title>Coherence Functions with Applications in Large-Margin Classification Methods</title>
<category>Paper</category>
<infohash>ee7e4e81f3992cbd660a706564c90dfbe7600ed1</infohash>
<guid>https://academictorrents.com/details/ee7e4e81f3992cbd660a706564c90dfbe7600ed1</guid>
<link>https://academictorrents.com/details/ee7e4e81f3992cbd660a706564c90dfbe7600ed1</link>
<description/>
<size>345578</size>
</item><item>
<title>On Spectral Learning</title>
<category>Paper</category>
<infohash>c3967c3de128b6bfa1800879d4378495b0bc4032</infohash>
<guid>https://academictorrents.com/details/c3967c3de128b6bfa1800879d4378495b0bc4032</guid>
<link>https://academictorrents.com/details/c3967c3de128b6bfa1800879d4378495b0bc4032</link>
<description/>
<size>152333</size>
</item><item>
<title>Semi-Supervised Novelty Detection</title>
<category>Paper</category>
<infohash>9fdd1dd5190e1360c2173cab897ebfc26480e3dd</infohash>
<guid>https://academictorrents.com/details/9fdd1dd5190e1360c2173cab897ebfc26480e3dd</guid>
<link>https://academictorrents.com/details/9fdd1dd5190e1360c2173cab897ebfc26480e3dd</link>
<description/>
<size>291502</size>
</item><item>
<title>Metric and Kernel Learning Using a Linear Transformation</title>
<category>Paper</category>
<infohash>5dec4c22172cf72a9ecf73b43d563e969ba40ed7</infohash>
<guid>https://academictorrents.com/details/5dec4c22172cf72a9ecf73b43d563e969ba40ed7</guid>
<link>https://academictorrents.com/details/5dec4c22172cf72a9ecf73b43d563e969ba40ed7</link>
<description/>
<size>473906</size>
</item><item>
<title>Information Retrieval Perspective to Nonlinear Dimensionality Reduction for Data Visualization</title>
<category>Paper</category>
<infohash>fce657527452309b8424f8064c66ffe171d3c851</infohash>
<guid>https://academictorrents.com/details/fce657527452309b8424f8064c66ffe171d3c851</guid>
<link>https://academictorrents.com/details/fce657527452309b8424f8064c66ffe171d3c851</link>
<description/>
<size>2908147</size>
</item><item>
<title>Efficient Heuristics for Discriminative Structure Learning of Bayesian Network Classifiers</title>
<category>Paper</category>
<infohash>1b022ef4ae73fcab4ec419909171e8b337615d0d</infohash>
<guid>https://academictorrents.com/details/1b022ef4ae73fcab4ec419909171e8b337615d0d</guid>
<link>https://academictorrents.com/details/1b022ef4ae73fcab4ec419909171e8b337615d0d</link>
<description/>
<size>649077</size>
</item><item>
<title>FastInf: An Efficient Approximate Inference Library</title>
<category>Paper</category>
<infohash>0d6ae43319679a7f2d46371b04e21d1a6b18595c</infohash>
<guid>https://academictorrents.com/details/0d6ae43319679a7f2d46371b04e21d1a6b18595c</guid>
<link>https://academictorrents.com/details/0d6ae43319679a7f2d46371b04e21d1a6b18595c</link>
<description/>
<size>32598</size>
</item><item>
<title>Learning Instance-Specific Predictive Models</title>
<category>Paper</category>
<infohash>6d053654c5ef917bfa57f520a9960678b0bf87d4</infohash>
<guid>https://academictorrents.com/details/6d053654c5ef917bfa57f520a9960678b0bf87d4</guid>
<link>https://academictorrents.com/details/6d053654c5ef917bfa57f520a9960678b0bf87d4</link>
<description/>
<size>474953</size>
</item><item>
<title>Probability Product Kernels (Special Topic on Learning Theory)</title>
<category>Paper</category>
<infohash>99298a049a113961f3b4cba160ea30c73091c312</infohash>
<guid>https://academictorrents.com/details/99298a049a113961f3b4cba160ea30c73091c312</guid>
<link>https://academictorrents.com/details/99298a049a113961f3b4cba160ea30c73091c312</link>
<description/>
<size>742632</size>
</item><item>
<title>Fast Approximation of Matrix Coherence and Statistical Leverage</title>
<category>Paper</category>
<infohash>20cb8b1f84a0405538ca7ea30ce8618f749ccef2</infohash>
<guid>https://academictorrents.com/details/20cb8b1f84a0405538ca7ea30ce8618f749ccef2</guid>
<link>https://academictorrents.com/details/20cb8b1f84a0405538ca7ea30ce8618f749ccef2</link>
<description/>
<size>285257</size>
</item><item>
<title>Introduction to the Special Issue on Machine Learning Methods for Text and Images</title>
<category>Paper</category>
<infohash>69af795cc5dd6387b7561fd20ff72615d802bb28</infohash>
<guid>https://academictorrents.com/details/69af795cc5dd6387b7561fd20ff72615d802bb28</guid>
<link>https://academictorrents.com/details/69af795cc5dd6387b7561fd20ff72615d802bb28</link>
<description/>
<size>346302</size>
</item><item>
<title>Fusion of Domain Knowledge with Data for Structural Learning in Object Oriented Domains</title>
<category>Paper</category>
<infohash>f1ecf4fae4c705c7db6c7731b896c1f896f58d41</infohash>
<guid>https://academictorrents.com/details/f1ecf4fae4c705c7db6c7731b896c1f896f58d41</guid>
<link>https://academictorrents.com/details/f1ecf4fae4c705c7db6c7731b896c1f896f58d41</link>
<description/>
<size>256291</size>
</item><item>
<title>A Geometric Approach to Multi-Criterion Reinforcement Learning</title>
<category>Paper</category>
<infohash>8dee57dd28d7a62ee2e35c68a9b8f04be69bcc69</infohash>
<guid>https://academictorrents.com/details/8dee57dd28d7a62ee2e35c68a9b8f04be69bcc69</guid>
<link>https://academictorrents.com/details/8dee57dd28d7a62ee2e35c68a9b8f04be69bcc69</link>
<description/>
<size>309350</size>
</item><item>
<title>Restricted Eigenvalue Properties for Correlated Gaussian Designs</title>
<category>Paper</category>
<infohash>674c1f85278061a46d1a4a61a5234983c2abdbcc</infohash>
<guid>https://academictorrents.com/details/674c1f85278061a46d1a4a61a5234983c2abdbcc</guid>
<link>https://academictorrents.com/details/674c1f85278061a46d1a4a61a5234983c2abdbcc</link>
<description/>
<size>153447</size>
</item><item>
<title>Blind Source Recovery: A Framework in the State Space</title>
<category>Paper</category>
<infohash>c4f729a7185a1a420bfe12f21db64289345a26d3</infohash>
<guid>https://academictorrents.com/details/c4f729a7185a1a420bfe12f21db64289345a26d3</guid>
<link>https://academictorrents.com/details/c4f729a7185a1a420bfe12f21db64289345a26d3</link>
<description/>
<size>196763</size>
</item><item>
<title>Algorithmic Luckiness</title>
<category>Paper</category>
<infohash>d71eaafbc22e4c039467db80d9c031f54e390627</infohash>
<guid>https://academictorrents.com/details/d71eaafbc22e4c039467db80d9c031f54e390627</guid>
<link>https://academictorrents.com/details/d71eaafbc22e4c039467db80d9c031f54e390627</link>
<description/>
<size>414671</size>
</item><item>
<title>Learning with Mixtures of Trees</title>
<category>Paper</category>
<infohash>d2a046b69cc0f8f04a443f01f6b495e220b05b75</infohash>
<guid>https://academictorrents.com/details/d2a046b69cc0f8f04a443f01f6b495e220b05b75</guid>
<link>https://academictorrents.com/details/d2a046b69cc0f8f04a443f01f6b495e220b05b75</link>
<description/>
<size>347232</size>
</item><item>
<title>A Comparison of the Lasso and Marginal Regression</title>
<category>Paper</category>
<infohash>b6126eb63713f8560bdf000c55e06e9b70570039</infohash>
<guid>https://academictorrents.com/details/b6126eb63713f8560bdf000c55e06e9b70570039</guid>
<link>https://academictorrents.com/details/b6126eb63713f8560bdf000c55e06e9b70570039</link>
<description/>
<size>953292</size>
</item><item>
<title>Algebraic Geometric Comparison of Probability Distributions</title>
<category>Paper</category>
<infohash>a8ed169e525254fe11d75b824f2d1ba95b9e9643</infohash>
<guid>https://academictorrents.com/details/a8ed169e525254fe11d75b824f2d1ba95b9e9643</guid>
<link>https://academictorrents.com/details/a8ed169e525254fe11d75b824f2d1ba95b9e9643</link>
<description/>
<size>534833</size>
</item><item>
<title>Query Strategies for Evading Convex-Inducing Classifiers</title>
<category>Paper</category>
<infohash>f3bdf58e2eea2c35c9066a1e5fcbcd2b62d142e7</infohash>
<guid>https://academictorrents.com/details/f3bdf58e2eea2c35c9066a1e5fcbcd2b62d142e7</guid>
<link>https://academictorrents.com/details/f3bdf58e2eea2c35c9066a1e5fcbcd2b62d142e7</link>
<description/>
<size>374135</size>
</item><item>
<title>Sign Language Recognition using Sub-Units</title>
<category>Paper</category>
<infohash>86bf966e9795896610e0d4609ad4b2ca489e3438</infohash>
<guid>https://academictorrents.com/details/86bf966e9795896610e0d4609ad4b2ca489e3438</guid>
<link>https://academictorrents.com/details/86bf966e9795896610e0d4609ad4b2ca489e3438</link>
<description/>
<size>2388079</size>
</item><item>
<title>Why Does Unsupervised Pre-training Help Deep Learning?</title>
<category>Paper</category>
<infohash>c03f42edc1ceb9f0d4a4fd259ef1ef0803aec192</infohash>
<guid>https://academictorrents.com/details/c03f42edc1ceb9f0d4a4fd259ef1ef0803aec192</guid>
<link>https://academictorrents.com/details/c03f42edc1ceb9f0d4a4fd259ef1ef0803aec192</link>
<description/>
<size>1291944</size>
</item><item>
<title>A Model of the Perception of Facial Expressions of Emotion by Humans: Research Overview and Perspectives</title>
<category>Paper</category>
<infohash>0640d358d4e297c0ad855d874d554d83fd5606b4</infohash>
<guid>https://academictorrents.com/details/0640d358d4e297c0ad855d874d554d83fd5606b4</guid>
<link>https://academictorrents.com/details/0640d358d4e297c0ad855d874d554d83fd5606b4</link>
<description/>
<size>1679269</size>
</item><item>
<title>Stability of Density-Based Clustering</title>
<category>Paper</category>
<infohash>fdce3a5899f58bdae0f8a06105576ddc81eb89e1</infohash>
<guid>https://academictorrents.com/details/fdce3a5899f58bdae0f8a06105576ddc81eb89e1</guid>
<link>https://academictorrents.com/details/fdce3a5899f58bdae0f8a06105576ddc81eb89e1</link>
<description/>
<size>2207175</size>
</item><item>
<title>Regularized Principal Manifolds (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>54fde50d7ead3bddc030d1b3c9348d89e698a1d2</infohash>
<guid>https://academictorrents.com/details/54fde50d7ead3bddc030d1b3c9348d89e698a1d2</guid>
<link>https://academictorrents.com/details/54fde50d7ead3bddc030d1b3c9348d89e698a1d2</link>
<description/>
<size>728955</size>
</item><item>
<title>Bias-Variance Analysis of Support Vector Machines for the Development of SVM-Based Ensemble Methods</title>
<category>Paper</category>
<infohash>c741c15c75d92ec8b37cd9b2d3ef157f3acde647</infohash>
<guid>https://academictorrents.com/details/c741c15c75d92ec8b37cd9b2d3ef157f3acde647</guid>
<link>https://academictorrents.com/details/c741c15c75d92ec8b37cd9b2d3ef157f3acde647</link>
<description/>
<size>345177</size>
</item><item>
<title>Subgroup Discovery with CN2-SD</title>
<category>Paper</category>
<infohash>5c7183e8c3f23ead7dcdfc70340120d312e779f9</infohash>
<guid>https://academictorrents.com/details/5c7183e8c3f23ead7dcdfc70340120d312e779f9</guid>
<link>https://academictorrents.com/details/5c7183e8c3f23ead7dcdfc70340120d312e779f9</link>
<description/>
<size>133485</size>
</item><item>
<title>Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications</title>
<category>Paper</category>
<infohash>12dec74bab119eaa23a0ced63349ba6e35cf8656</infohash>
<guid>https://academictorrents.com/details/12dec74bab119eaa23a0ced63349ba6e35cf8656</guid>
<link>https://academictorrents.com/details/12dec74bab119eaa23a0ced63349ba6e35cf8656</link>
<description/>
<size>264275</size>
</item><item>
<title>Consensus-Based Distributed Support Vector Machines</title>
<category>Paper</category>
<infohash>4194b6f476aedf6b101d5e4d3a99159ac9d5c076</infohash>
<guid>https://academictorrents.com/details/4194b6f476aedf6b101d5e4d3a99159ac9d5c076</guid>
<link>https://academictorrents.com/details/4194b6f476aedf6b101d5e4d3a99159ac9d5c076</link>
<description/>
<size>1057788</size>
</item><item>
<title>Non-Sparse Multiple Kernel Fisher Discriminant Analysis</title>
<category>Paper</category>
<infohash>c3eee4a1e43d2ab0d8cf860eb77f3d2af076bd5e</infohash>
<guid>https://academictorrents.com/details/c3eee4a1e43d2ab0d8cf860eb77f3d2af076bd5e</guid>
<link>https://academictorrents.com/details/c3eee4a1e43d2ab0d8cf860eb77f3d2af076bd5e</link>
<description/>
<size>555623</size>
</item><item>
<title>The huge Package for High-dimensional Undirected Graph Estimation in R</title>
<category>Paper</category>
<infohash>b33984a3ffa7a931e34d2af393becbeda77c2bf1</infohash>
<guid>https://academictorrents.com/details/b33984a3ffa7a931e34d2af393becbeda77c2bf1</guid>
<link>https://academictorrents.com/details/b33984a3ffa7a931e34d2af393becbeda77c2bf1</link>
<description/>
<size>339261</size>
</item><item>
<title>Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion</title>
<category>Paper</category>
<infohash>4455093da9243e81e4d23bcef57178a917f6c183</infohash>
<guid>https://academictorrents.com/details/4455093da9243e81e4d23bcef57178a917f6c183</guid>
<link>https://academictorrents.com/details/4455093da9243e81e4d23bcef57178a917f6c183</link>
<description/>
<size>1392544</size>
</item><item>
<title>ML-Flex: A Flexible Toolbox for Performing Classification Analyses In Parallel</title>
<category>Paper</category>
<infohash>204a7af1ec7aafcf1c7a37102d4b353a93620f2d</infohash>
<guid>https://academictorrents.com/details/204a7af1ec7aafcf1c7a37102d4b353a93620f2d</guid>
<link>https://academictorrents.com/details/204a7af1ec7aafcf1c7a37102d4b353a93620f2d</link>
<description/>
<size>37007</size>
</item><item>
<title>MOA: Massive Online Analysis</title>
<category>Paper</category>
<infohash>fa2f814f97e6425ee42f63859c96ff4f80002919</infohash>
<guid>https://academictorrents.com/details/fa2f814f97e6425ee42f63859c96ff4f80002919</guid>
<link>https://academictorrents.com/details/fa2f814f97e6425ee42f63859c96ff4f80002919</link>
<description/>
<size>82214</size>
</item><item>
<title>Dimensionality Reduction via Sparse Support Vector Machines (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>fdb1fb6a9897eb597328eaa81b648bd8fc5d1ec0</infohash>
<guid>https://academictorrents.com/details/fdb1fb6a9897eb597328eaa81b648bd8fc5d1ec0</guid>
<link>https://academictorrents.com/details/fdb1fb6a9897eb597328eaa81b648bd8fc5d1ec0</link>
<description/>
<size>200162</size>
</item><item>
<title>Introduction to the Special Issue on the Fusion of Domain Knowledge with Data for Decision Support</title>
<category>Paper</category>
<infohash>58d212578d6b6432a113f6dd3ef0b1a3c528cf9f</infohash>
<guid>https://academictorrents.com/details/58d212578d6b6432a113f6dd3ef0b1a3c528cf9f</guid>
<link>https://academictorrents.com/details/58d212578d6b6432a113f6dd3ef0b1a3c528cf9f</link>
<description/>
<size>11696</size>
</item><item>
<title>Learning From Crowds</title>
<category>Paper</category>
<infohash>c7e94383a95f855df08531e30bfcb717924baa8b</infohash>
<guid>https://academictorrents.com/details/c7e94383a95f855df08531e30bfcb717924baa8b</guid>
<link>https://academictorrents.com/details/c7e94383a95f855df08531e30bfcb717924baa8b</link>
<description/>
<size>238950</size>
</item><item>
<title>Rate Minimaxity of the Lasso and Dantzig Selector for the ℓq Loss in ℓr Balls</title>
<category>Paper</category>
<infohash>ba4ad580df0ca2517cd8ea20551bc2645bbf573f</infohash>
<guid>https://academictorrents.com/details/ba4ad580df0ca2517cd8ea20551bc2645bbf573f</guid>
<link>https://academictorrents.com/details/ba4ad580df0ca2517cd8ea20551bc2645bbf573f</link>
<description/>
<size>201788</size>
</item><item>
<title>SVDFeature: A Toolkit for Feature-based Collaborative Filtering</title>
<category>Paper</category>
<infohash>39fba4740d1db0ce4c2a28ea41c7e1e4d609a728</infohash>
<guid>https://academictorrents.com/details/39fba4740d1db0ce4c2a28ea41c7e1e4d609a728</guid>
<link>https://academictorrents.com/details/39fba4740d1db0ce4c2a28ea41c7e1e4d609a728</link>
<description/>
<size>224993</size>
</item><item>
<title>ICA for Watermarking Digital Images</title>
<category>Paper</category>
<infohash>9f40e4061d1c70a8ef6e0c828c1f7f4f4b64648b</infohash>
<guid>https://academictorrents.com/details/9f40e4061d1c70a8ef6e0c828c1f7f4f4b64648b</guid>
<link>https://academictorrents.com/details/9f40e4061d1c70a8ef6e0c828c1f7f4f4b64648b</link>
<description/>
<size>3443364</size>
</item><item>
<title>A Generative Model for Separating Illumination and Reflectance from Images</title>
<category>Paper</category>
<infohash>58f00d4652cd7b5b5a238c5562a977a39183defa</infohash>
<guid>https://academictorrents.com/details/58f00d4652cd7b5b5a238c5562a977a39183defa</guid>
<link>https://academictorrents.com/details/58f00d4652cd7b5b5a238c5562a977a39183defa</link>
<description/>
<size>511408</size>
</item><item>
<title>Active Clustering of Biological Sequences</title>
<category>Paper</category>
<infohash>9dd1a91e17e2294624f643e3f1073cb26aa39a2c</infohash>
<guid>https://academictorrents.com/details/9dd1a91e17e2294624f643e3f1073cb26aa39a2c</guid>
<link>https://academictorrents.com/details/9dd1a91e17e2294624f643e3f1073cb26aa39a2c</link>
<description/>
<size>332472</size>
</item><item>
<title>Learning Rates for Q-learning</title>
<category>Paper</category>
<infohash>3459786202aa807fde4498d1b85fad02d314d94a</infohash>
<guid>https://academictorrents.com/details/3459786202aa807fde4498d1b85fad02d314d94a</guid>
<link>https://academictorrents.com/details/3459786202aa807fde4498d1b85fad02d314d94a</link>
<description/>
<size>225646</size>
</item><item>
<title>A Rotation Test to Verify Latent Structure</title>
<category>Paper</category>
<infohash>20034c6750ff4bc08205ca738a7a4c8e284c2025</infohash>
<guid>https://academictorrents.com/details/20034c6750ff4bc08205ca738a7a4c8e284c2025</guid>
<link>https://academictorrents.com/details/20034c6750ff4bc08205ca738a7a4c8e284c2025</link>
<description/>
<size>291841</size>
</item><item>
<title>Tracking a Small Set of Experts by Mixing Past Posteriors</title>
<category>Paper</category>
<infohash>31c61e72eab7361db5abfae6e0ba8dfb5140e3a4</infohash>
<guid>https://academictorrents.com/details/31c61e72eab7361db5abfae6e0ba8dfb5140e3a4</guid>
<link>https://academictorrents.com/details/31c61e72eab7361db5abfae6e0ba8dfb5140e3a4</link>
<description/>
<size>64739</size>
</item><item>
<title>Dependency Networks for Inference, Collaborative Filtering, and Data Visualization</title>
<category>Paper</category>
<infohash>d8552c6f745b8000a052c9fcb09635e97bc63521</infohash>
<guid>https://academictorrents.com/details/d8552c6f745b8000a052c9fcb09635e97bc63521</guid>
<link>https://academictorrents.com/details/d8552c6f745b8000a052c9fcb09635e97bc63521</link>
<description/>
<size>294245</size>
</item><item>
<title>Introduction to the Special Issue on Inductive Logic Programming</title>
<category>Paper</category>
<infohash>e5383c8d7ddc65c489aedfd1fbbc73f053c81f87</infohash>
<guid>https://academictorrents.com/details/e5383c8d7ddc65c489aedfd1fbbc73f053c81f87</guid>
<link>https://academictorrents.com/details/e5383c8d7ddc65c489aedfd1fbbc73f053c81f87</link>
<description/>
<size>708772</size>
</item><item>
<title>Oger: Modular Learning Architectures For Large-Scale Sequential Processing</title>
<category>Paper</category>
<infohash>6c1060c9dca64d64c7f633f9a8b03cdf7beed258</infohash>
<guid>https://academictorrents.com/details/6c1060c9dca64d64c7f633f9a8b03cdf7beed258</guid>
<link>https://academictorrents.com/details/6c1060c9dca64d64c7f633f9a8b03cdf7beed258</link>
<description/>
<size>114319</size>
</item><item>
<title>Query Transformations for Improving the Efficiency of ILP Systems</title>
<category>Paper</category>
<infohash>d15b116c906d3995979b99e5f143658e60327e84</infohash>
<guid>https://academictorrents.com/details/d15b116c906d3995979b99e5f143658e60327e84</guid>
<link>https://academictorrents.com/details/d15b116c906d3995979b99e5f143658e60327e84</link>
<description/>
<size>337196</size>
</item><item>
<title>Efficient Algorithms for Universal Portfolios</title>
<category>Paper</category>
<infohash>c44653bbc9769fd294d332795c3f0c85a658617b</infohash>
<guid>https://academictorrents.com/details/c44653bbc9769fd294d332795c3f0c85a658617b</guid>
<link>https://academictorrents.com/details/c44653bbc9769fd294d332795c3f0c85a658617b</link>
<description/>
<size>358863</size>
</item><item>
<title>A Maximum Likelihood Approach to Single-channel Source Separation</title>
<category>Paper</category>
<infohash>aa50815201cdf7090d85b151413fe33009ad4075</infohash>
<guid>https://academictorrents.com/details/aa50815201cdf7090d85b151413fe33009ad4075</guid>
<link>https://academictorrents.com/details/aa50815201cdf7090d85b151413fe33009ad4075</link>
<description/>
<size>738334</size>
</item><item>
<title>Permutation Tests for Studying Classifier Performance</title>
<category>Paper</category>
<infohash>6fc40198ba0ab0706759c533a28001804a377c8b</infohash>
<guid>https://academictorrents.com/details/6fc40198ba0ab0706759c533a28001804a377c8b</guid>
<link>https://academictorrents.com/details/6fc40198ba0ab0706759c533a28001804a377c8b</link>
<description/>
<size>295573</size>
</item><item>
<title>On the Foundations of Noise-free Selective Classification</title>
<category>Paper</category>
<infohash>d6ba8863427fcdd7f8f73de75385ee61c3dbf0e4</infohash>
<guid>https://academictorrents.com/details/d6ba8863427fcdd7f8f73de75385ee61c3dbf0e4</guid>
<link>https://academictorrents.com/details/d6ba8863427fcdd7f8f73de75385ee61c3dbf0e4</link>
<description/>
<size>552991</size>
</item><item>
<title>PyBrain</title>
<category>Paper</category>
<infohash>22aeae981371179668f128d97c62ac9f4435f605</infohash>
<guid>https://academictorrents.com/details/22aeae981371179668f128d97c62ac9f4435f605</guid>
<link>https://academictorrents.com/details/22aeae981371179668f128d97c62ac9f4435f605</link>
<description/>
<size>71138</size>
</item><item>
<title>On Robustness Properties of Convex Risk Minimization Methods for Pattern Recognition</title>
<category>Paper</category>
<infohash>59655ff68200d472c491cc2b8b8112275e3ff3f6</infohash>
<guid>https://academictorrents.com/details/59655ff68200d472c491cc2b8b8112275e3ff3f6</guid>
<link>https://academictorrents.com/details/59655ff68200d472c491cc2b8b8112275e3ff3f6</link>
<description/>
<size>488218</size>
</item><item>
<title>Speedup Learning for Repair-based Search by Identifying Redundant Steps</title>
<category>Paper</category>
<infohash>35f46d5b8f8f76d4e211ecdda423242b6057c382</infohash>
<guid>https://academictorrents.com/details/35f46d5b8f8f76d4e211ecdda423242b6057c382</guid>
<link>https://academictorrents.com/details/35f46d5b8f8f76d4e211ecdda423242b6057c382</link>
<description/>
<size>167449</size>
</item><item>
<title>The em Algorithm for Kernel Matrix Completion with Auxiliary Data</title>
<category>Paper</category>
<infohash>8056737d81e1a9ff89e3b6566a7f69aa37a60e09</infohash>
<guid>https://academictorrents.com/details/8056737d81e1a9ff89e3b6566a7f69aa37a60e09</guid>
<link>https://academictorrents.com/details/8056737d81e1a9ff89e3b6566a7f69aa37a60e09</link>
<description/>
<size>144745</size>
</item><item>
<title>Trading Regret for Efficiency: Online Convex Optimization with Long Term Constraints</title>
<category>Paper</category>
<infohash>ed4dd1fe07fa81d93667cb6c6adf9ce6147f4ed5</infohash>
<guid>https://academictorrents.com/details/ed4dd1fe07fa81d93667cb6c6adf9ce6147f4ed5</guid>
<link>https://academictorrents.com/details/ed4dd1fe07fa81d93667cb6c6adf9ce6147f4ed5</link>
<description/>
<size>230612</size>
</item><item>
<title>Matched Gene Selection and Committee Classifier for Molecular Classification of Heterogeneous Diseases</title>
<category>Paper</category>
<infohash>7f9333199bf3ce69d5e58e7f7cc86489ed4c6c98</infohash>
<guid>https://academictorrents.com/details/7f9333199bf3ce69d5e58e7f7cc86489ed4c6c98</guid>
<link>https://academictorrents.com/details/7f9333199bf3ce69d5e58e7f7cc86489ed4c6c98</link>
<description/>
<size>548809</size>
</item><item>
<title>Classification Methods with Reject Option Based on Convex Risk Minimization</title>
<category>Paper</category>
<infohash>26cdc093869bf1be51b954dec56c5ec2c3fdbc93</infohash>
<guid>https://academictorrents.com/details/26cdc093869bf1be51b954dec56c5ec2c3fdbc93</guid>
<link>https://academictorrents.com/details/26cdc093869bf1be51b954dec56c5ec2c3fdbc93</link>
<description/>
<size>123287</size>
</item><item>
<title>Some Properties of Regularized Kernel Methods</title>
<category>Paper</category>
<infohash>930280d0918a2b8941c035ccf0aa642e8cfe1152</infohash>
<guid>https://academictorrents.com/details/930280d0918a2b8941c035ccf0aa642e8cfe1152</guid>
<link>https://academictorrents.com/details/930280d0918a2b8941c035ccf0aa642e8cfe1152</link>
<description/>
<size>215967</size>
</item><item>
<title>An Exponential Model for Infinite Rankings</title>
<category>Paper</category>
<infohash>6db71aa8d73c2cc65af35047167008da0a746d08</infohash>
<guid>https://academictorrents.com/details/6db71aa8d73c2cc65af35047167008da0a746d08</guid>
<link>https://academictorrents.com/details/6db71aa8d73c2cc65af35047167008da0a746d08</link>
<description/>
<size>392923</size>
</item><item>
<title>Evolving Static Representations for Task Transfer</title>
<category>Paper</category>
<infohash>efdec7ba0c0770923e79325ab0e34c20784d9279</infohash>
<guid>https://academictorrents.com/details/efdec7ba0c0770923e79325ab0e34c20784d9279</guid>
<link>https://academictorrents.com/details/efdec7ba0c0770923e79325ab0e34c20784d9279</link>
<description/>
<size>9964124</size>
</item><item>
<title>RCV1: A New Benchmark Collection for Text Categorization Research</title>
<category>Paper</category>
<infohash>7e86d470f7a4cd0b370c141b7ed6cf6ca7ff170c</infohash>
<guid>https://academictorrents.com/details/7e86d470f7a4cd0b370c141b7ed6cf6ca7ff170c</guid>
<link>https://academictorrents.com/details/7e86d470f7a4cd0b370c141b7ed6cf6ca7ff170c</link>
<description/>
<size>394830</size>
</item><item>
<title>Optimal Solutions for Sparse Principal Component Analysis</title>
<category>Paper</category>
<infohash>f95e296ad0515cd337e57a3df881f37763e54eae</infohash>
<guid>https://academictorrents.com/details/f95e296ad0515cd337e57a3df881f37763e54eae</guid>
<link>https://academictorrents.com/details/f95e296ad0515cd337e57a3df881f37763e54eae</link>
<description/>
<size>308863</size>
</item><item>
<title>Generalization Error Bounds for Bayesian Mixture Algorithms</title>
<category>Paper</category>
<infohash>b859c3d5df15627e2bbce1b49b2ce0060d23a55f</infohash>
<guid>https://academictorrents.com/details/b859c3d5df15627e2bbce1b49b2ce0060d23a55f</guid>
<link>https://academictorrents.com/details/b859c3d5df15627e2bbce1b49b2ce0060d23a55f</link>
<description/>
<size>160178</size>
</item><item>
<title>Model Selection: Beyond the Bayesian/Frequentist Divide</title>
<category>Paper</category>
<infohash>ae655426e6cb2851f2bc3d75d41ee454acf2ac5a</infohash>
<guid>https://academictorrents.com/details/ae655426e6cb2851f2bc3d75d41ee454acf2ac5a</guid>
<link>https://academictorrents.com/details/ae655426e6cb2851f2bc3d75d41ee454acf2ac5a</link>
<description/>
<size>222720</size>
</item><item>
<title>Relational Learning as Search in a Critical Region</title>
<category>Paper</category>
<infohash>dabb4d1d3f16a74959bc65e6afb90da5c5382d1a</infohash>
<guid>https://academictorrents.com/details/dabb4d1d3f16a74959bc65e6afb90da5c5382d1a</guid>
<link>https://academictorrents.com/details/dabb4d1d3f16a74959bc65e6afb90da5c5382d1a</link>
<description/>
<size>100204</size>
</item><item>
<title>The SHOGUN Machine Learning Toolbox</title>
<category>Paper</category>
<infohash>e210ec5a3ea46b557b4c798321e8712ba6f777e9</infohash>
<guid>https://academictorrents.com/details/e210ec5a3ea46b557b4c798321e8712ba6f777e9</guid>
<link>https://academictorrents.com/details/e210ec5a3ea46b557b4c798321e8712ba6f777e9</link>
<description/>
<size>37239</size>
</item><item>
<title>Regularization Techniques for Learning with Matrices</title>
<category>Paper</category>
<infohash>9886b06e76eca2473eed4df2e7370715c9e2ce13</infohash>
<guid>https://academictorrents.com/details/9886b06e76eca2473eed4df2e7370715c9e2ce13</guid>
<link>https://academictorrents.com/details/9886b06e76eca2473eed4df2e7370715c9e2ce13</link>
<description/>
<size>242012</size>
</item><item>
<title>LIBLINEAR: A Library for Large Linear Classification (Machine Learning Open Source Software Paper)</title>
<category>Paper</category>
<infohash>e20417f5f84f61a5aa948854fb3ca0e833116462</infohash>
<guid>https://academictorrents.com/details/e20417f5f84f61a5aa948854fb3ca0e833116462</guid>
<link>https://academictorrents.com/details/e20417f5f84f61a5aa948854fb3ca0e833116462</link>
<description/>
<size>1430056</size>
</item><item>
<title>Greedy Algorithms for Classification -- Consistency, Convergence Rates, and Adaptivity</title>
<category>Paper</category>
<infohash>a29d48ae0fca43488819fa4e74165a7b69f54536</infohash>
<guid>https://academictorrents.com/details/a29d48ae0fca43488819fa4e74165a7b69f54536</guid>
<link>https://academictorrents.com/details/a29d48ae0fca43488819fa4e74165a7b69f54536</link>
<description/>
<size>245623</size>
</item><item>
<title>Gaussian Processes for Machine Learning (GPML) Toolbox</title>
<category>Paper</category>
<infohash>1574823977be15943c4505c7e51c3b1482f12051</infohash>
<guid>https://academictorrents.com/details/1574823977be15943c4505c7e51c3b1482f12051</guid>
<link>https://academictorrents.com/details/1574823977be15943c4505c7e51c3b1482f12051</link>
<description/>
<size>55052</size>
</item><item>
<title>Discriminative Hierarchical Part-based Models for Human Parsing and Action Recognition</title>
<category>Paper</category>
<infohash>c1ce30cc08e90c43ce7971ff45269699bc8fbfe8</infohash>
<guid>https://academictorrents.com/details/c1ce30cc08e90c43ce7971ff45269699bc8fbfe8</guid>
<link>https://academictorrents.com/details/c1ce30cc08e90c43ce7971ff45269699bc8fbfe8</link>
<description/>
<size>9089811</size>
</item><item>
<title>Kronecker Graphs: An Approach to Modeling Networks</title>
<category>Paper</category>
<infohash>faf09d55b358bb0af3e792b866a9225023afd00a</infohash>
<guid>https://academictorrents.com/details/faf09d55b358bb0af3e792b866a9225023afd00a</guid>
<link>https://academictorrents.com/details/faf09d55b358bb0af3e792b866a9225023afd00a</link>
<description/>
<size>1291957</size>
</item><item>
<title>Integrating a Partial Model into Model Free Reinforcement Learning</title>
<category>Paper</category>
<infohash>a130a36bb43f6138f9d10200d127650dd97d954b</infohash>
<guid>https://academictorrents.com/details/a130a36bb43f6138f9d10200d127650dd97d954b</guid>
<link>https://academictorrents.com/details/a130a36bb43f6138f9d10200d127650dd97d954b</link>
<description/>
<size>328430</size>
</item><item>
<title>A Streaming Parallel Decision Tree Algorithm</title>
<category>Paper</category>
<infohash>6c9fde0154c251f556e2c6f0a674a0105b2f7c74</infohash>
<guid>https://academictorrents.com/details/6c9fde0154c251f556e2c6f0a674a0105b2f7c74</guid>
<link>https://academictorrents.com/details/6c9fde0154c251f556e2c6f0a674a0105b2f7c74</link>
<description/>
<size>512654</size>
</item><item>
<title>The Entire Regularization Path for the Support Vector Machine</title>
<category>Paper</category>
<infohash>604e5a5844751bf804cb2b146eb4c6294efa1bd3</infohash>
<guid>https://academictorrents.com/details/604e5a5844751bf804cb2b146eb4c6294efa1bd3</guid>
<link>https://academictorrents.com/details/604e5a5844751bf804cb2b146eb4c6294efa1bd3</link>
<description/>
<size>184623</size>
</item><item>
<title>Forecasting Web Page Views: Methods and Observations</title>
<category>Paper</category>
<infohash>1f25f4cf7b6c1c3a8284e0b9b3e78b996af9df5c</infohash>
<guid>https://academictorrents.com/details/1f25f4cf7b6c1c3a8284e0b9b3e78b996af9df5c</guid>
<link>https://academictorrents.com/details/1f25f4cf7b6c1c3a8284e0b9b3e78b996af9df5c</link>
<description/>
<size>2499506</size>
</item><item>
<title>Approximate Riemannian Conjugate Gradient Learning for Fixed-Form Variational Bayes</title>
<category>Paper</category>
<infohash>588ba0985c4ccb4026bf3f460d8831468842bd62</infohash>
<guid>https://academictorrents.com/details/588ba0985c4ccb4026bf3f460d8831468842bd62</guid>
<link>https://academictorrents.com/details/588ba0985c4ccb4026bf3f460d8831468842bd62</link>
<description/>
<size>553636</size>
</item><item>
<title>On Learning with Integral Operators</title>
<category>Paper</category>
<infohash>36cfb59ec3fe0fccd5171f44b1c1a93a40941001</infohash>
<guid>https://academictorrents.com/details/36cfb59ec3fe0fccd5171f44b1c1a93a40941001</guid>
<link>https://academictorrents.com/details/36cfb59ec3fe0fccd5171f44b1c1a93a40941001</link>
<description/>
<size>229281</size>
</item><item>
<title>Multi Kernel Learning with Online-Batch Optimization</title>
<category>Paper</category>
<infohash>f2c29885d2175c129188959ca2dcc24e11e09bbd</infohash>
<guid>https://academictorrents.com/details/f2c29885d2175c129188959ca2dcc24e11e09bbd</guid>
<link>https://academictorrents.com/details/f2c29885d2175c129188959ca2dcc24e11e09bbd</link>
<description/>
<size>351513</size>
</item><item>
<title>Error-Correcting Output Codes Library</title>
<category>Paper</category>
<infohash>85dbb42e94f90e3e0f9387458f384205a788839a</infohash>
<guid>https://academictorrents.com/details/85dbb42e94f90e3e0f9387458f384205a788839a</guid>
<link>https://academictorrents.com/details/85dbb42e94f90e3e0f9387458f384205a788839a</link>
<description/>
<size>95186</size>
</item><item>
<title>Near-optimal Regret Bounds for Reinforcement Learning</title>
<category>Paper</category>
<infohash>4f529518067fdcaebcd7a132ca84a640d1571b8a</infohash>
<guid>https://academictorrents.com/details/4f529518067fdcaebcd7a132ca84a640d1571b8a</guid>
<link>https://academictorrents.com/details/4f529518067fdcaebcd7a132ca84a640d1571b8a</link>
<description/>
<size>358457</size>
</item><item>
<title>Mean Field Variational Approximation for Continuous-Time Bayesian Networks</title>
<category>Paper</category>
<infohash>38d6a74592f397d2b774b947d8e01630fab544d8</infohash>
<guid>https://academictorrents.com/details/38d6a74592f397d2b774b947d8e01630fab544d8</guid>
<link>https://academictorrents.com/details/38d6a74592f397d2b774b947d8e01630fab544d8</link>
<description/>
<size>1123543</size>
</item><item>
<title>Learning over Sets using Kernel Principal Angles (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>fa48547a07c7bd76260da274c8502ce2a8e2ce0b</infohash>
<guid>https://academictorrents.com/details/fa48547a07c7bd76260da274c8502ce2a8e2ce0b</guid>
<link>https://academictorrents.com/details/fa48547a07c7bd76260da274c8502ce2a8e2ce0b</link>
<description/>
<size>144512</size>
</item><item>
<title>Image Categorization by Learning and Reasoning with Regions</title>
<category>Paper</category>
<infohash>fb6a9936f3cf21537b29a8a99dea5c3877ee6d91</infohash>
<guid>https://academictorrents.com/details/fb6a9936f3cf21537b29a8a99dea5c3877ee6d91</guid>
<link>https://academictorrents.com/details/fb6a9936f3cf21537b29a8a99dea5c3877ee6d91</link>
<description/>
<size>149864</size>
</item><item>
<title>Policy Search using Paired Comparisons</title>
<category>Paper</category>
<infohash>0492ad5e1760d75603445e1cbf927c68ee038059</infohash>
<guid>https://academictorrents.com/details/0492ad5e1760d75603445e1cbf927c68ee038059</guid>
<link>https://academictorrents.com/details/0492ad5e1760d75603445e1cbf927c68ee038059</link>
<description/>
<size>178543</size>
</item><item>
<title>Optimal Distributed Online Prediction Using Mini-Batches</title>
<category>Paper</category>
<infohash>3d011e776f9208473622f1274fc4185b92b2186d</infohash>
<guid>https://academictorrents.com/details/3d011e776f9208473622f1274fc4185b92b2186d</guid>
<link>https://academictorrents.com/details/3d011e776f9208473622f1274fc4185b92b2186d</link>
<description/>
<size>334874</size>
</item><item>
<title>An Investigation of Missing Data Methods for Classification Trees Applied to Binary Response Data</title>
<category>Paper</category>
<infohash>c97acdd2dd26a94a8852d5869affca4b205de689</infohash>
<guid>https://academictorrents.com/details/c97acdd2dd26a94a8852d5869affca4b205de689</guid>
<link>https://academictorrents.com/details/c97acdd2dd26a94a8852d5869affca4b205de689</link>
<description/>
<size>408002</size>
</item><item>
<title>Consistency of Random Forests and Other Averaging Classifiers</title>
<category>Paper</category>
<infohash>9261b13dd1a091906c79f51af274e57650583679</infohash>
<guid>https://academictorrents.com/details/9261b13dd1a091906c79f51af274e57650583679</guid>
<link>https://academictorrents.com/details/9261b13dd1a091906c79f51af274e57650583679</link>
<description/>
<size>235857</size>
</item><item>
<title>Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics</title>
<category>Paper</category>
<infohash>7a225973c264012100cb17c5b36f645c3689436b</infohash>
<guid>https://academictorrents.com/details/7a225973c264012100cb17c5b36f645c3689436b</guid>
<link>https://academictorrents.com/details/7a225973c264012100cb17c5b36f645c3689436b</link>
<description/>
<size>2442076</size>
</item><item>
<title>Dynamic Policy Programming</title>
<category>Paper</category>
<infohash>29bf1e4d0803773356b58ba5c6727b75eb2212d7</infohash>
<guid>https://academictorrents.com/details/29bf1e4d0803773356b58ba5c6727b75eb2212d7</guid>
<link>https://academictorrents.com/details/29bf1e4d0803773356b58ba5c6727b75eb2212d7</link>
<description/>
<size>360042</size>
</item><item>
<title>WEKA: Experiences with a Java Open-Source Project</title>
<category>Paper</category>
<infohash>435721f87df3273b5fd212ba43664f28946844ca</infohash>
<guid>https://academictorrents.com/details/435721f87df3273b5fd212ba43664f28946844ca</guid>
<link>https://academictorrents.com/details/435721f87df3273b5fd212ba43664f28946844ca</link>
<description/>
<size>216113</size>
</item><item>
<title>Beyond Independent Components: Trees and Clusters</title>
<category>Paper</category>
<infohash>e377facbfe5a4a2fb8a80cd62fe9bb6b0c64b7ed</infohash>
<guid>https://academictorrents.com/details/e377facbfe5a4a2fb8a80cd62fe9bb6b0c64b7ed</guid>
<link>https://academictorrents.com/details/e377facbfe5a4a2fb8a80cd62fe9bb6b0c64b7ed</link>
<description/>
<size>239111</size>
</item><item>
<title>How to Explain Individual Classification Decisions</title>
<category>Paper</category>
<infohash>783985a34ad124c9f34c3cdc1c9e1e44daf3daf3</infohash>
<guid>https://academictorrents.com/details/783985a34ad124c9f34c3cdc1c9e1e44daf3daf3</guid>
<link>https://academictorrents.com/details/783985a34ad124c9f34c3cdc1c9e1e44daf3daf3</link>
<description/>
<size>1639263</size>
</item><item>
<title>Training and Testing Low-degree Polynomial Data Mappings via Linear SVM</title>
<category>Paper</category>
<infohash>4eae73f4109e57ed37f44dd42abca3994cbbc709</infohash>
<guid>https://academictorrents.com/details/4eae73f4109e57ed37f44dd42abca3994cbbc709</guid>
<link>https://academictorrents.com/details/4eae73f4109e57ed37f44dd42abca3994cbbc709</link>
<description/>
<size>150509</size>
</item><item>
<title>Approximate Inference on Planar Graphs using Loop Calculus and Belief Propagation</title>
<category>Paper</category>
<infohash>2f4baef4b5a92f3bf9b9b0ee8cdbdf350734eb53</infohash>
<guid>https://academictorrents.com/details/2f4baef4b5a92f3bf9b9b0ee8cdbdf350734eb53</guid>
<link>https://academictorrents.com/details/2f4baef4b5a92f3bf9b9b0ee8cdbdf350734eb53</link>
<description/>
<size>339800</size>
</item><item>
<title>Maximum Relative Margin and Data-Dependent Regularization</title>
<category>Paper</category>
<infohash>db869ba2cbad69ec5a64e02097e4883caf786755</infohash>
<guid>https://academictorrents.com/details/db869ba2cbad69ec5a64e02097e4883caf786755</guid>
<link>https://academictorrents.com/details/db869ba2cbad69ec5a64e02097e4883caf786755</link>
<description/>
<size>466217</size>
</item><item>
<title>Tree Decomposition for Large-Scale SVM Problems</title>
<category>Paper</category>
<infohash>7fb05f96b721e34657763f2902d2acd2063393f9</infohash>
<guid>https://academictorrents.com/details/7fb05f96b721e34657763f2902d2acd2063393f9</guid>
<link>https://academictorrents.com/details/7fb05f96b721e34657763f2902d2acd2063393f9</link>
<description/>
<size>403781</size>
</item><item>
<title>Optimal Search on Clustered Structural Constraint for Learning Bayesian Network Structure</title>
<category>Paper</category>
<infohash>4ee1ee4aad1d7f31e33468aa7c9d7ce642ebfe33</infohash>
<guid>https://academictorrents.com/details/4ee1ee4aad1d7f31e33468aa7c9d7ce642ebfe33</guid>
<link>https://academictorrents.com/details/4ee1ee4aad1d7f31e33468aa7c9d7ce642ebfe33</link>
<description/>
<size>295452</size>
</item><item>
<title>Blind Source Separation via Generalized Eigenvalue Decomposition</title>
<category>Paper</category>
<infohash>807ec78c5f243c58f7ecaefe9024be34ce1743d4</infohash>
<guid>https://academictorrents.com/details/807ec78c5f243c58f7ecaefe9024be34ce1743d4</guid>
<link>https://academictorrents.com/details/807ec78c5f243c58f7ecaefe9024be34ce1743d4</link>
<description/>
<size>546787</size>
</item><item>
<title>Learning Translation Invariant Kernels for Classification</title>
<category>Paper</category>
<infohash>777c5ed841f28d051cfb71b47ab4a8172c7cebc5</infohash>
<guid>https://academictorrents.com/details/777c5ed841f28d051cfb71b47ab4a8172c7cebc5</guid>
<link>https://academictorrents.com/details/777c5ed841f28d051cfb71b47ab4a8172c7cebc5</link>
<description/>
<size>398684</size>
</item><item>
<title>Maximum Likelihood in Cost-Sensitive Learning: Model Specification, Approximations, and Upper Bounds</title>
<category>Paper</category>
<infohash>b2f002d5c1b1d7b170c7f4932d7afbb8e8d98793</infohash>
<guid>https://academictorrents.com/details/b2f002d5c1b1d7b170c7f4932d7afbb8e8d98793</guid>
<link>https://academictorrents.com/details/b2f002d5c1b1d7b170c7f4932d7afbb8e8d98793</link>
<description/>
<size>2013118</size>
</item><item>
<title>Hilbert Space Embeddings and Metrics on Probability Measures</title>
<category>Paper</category>
<infohash>df6ff4362640658f647c606031aa9abda69c079d</infohash>
<guid>https://academictorrents.com/details/df6ff4362640658f647c606031aa9abda69c079d</guid>
<link>https://academictorrents.com/details/df6ff4362640658f647c606031aa9abda69c079d</link>
<description/>
<size>530497</size>
</item><item>
<title>Tree-Structured Neural Decoding</title>
<category>Paper</category>
<infohash>ee445f4745a448c0f32fd1531215ce690d3b1867</infohash>
<guid>https://academictorrents.com/details/ee445f4745a448c0f32fd1531215ce690d3b1867</guid>
<link>https://academictorrents.com/details/ee445f4745a448c0f32fd1531215ce690d3b1867</link>
<description/>
<size>254186</size>
</item><item>
<title>Finding the Most Interesting Patterns in a Database Quickly by Using Sequential Sampling</title>
<category>Paper</category>
<infohash>569ee0e53edf4e9564861d89d17f052d9dba6087</infohash>
<guid>https://academictorrents.com/details/569ee0e53edf4e9564861d89d17f052d9dba6087</guid>
<link>https://academictorrents.com/details/569ee0e53edf4e9564861d89d17f052d9dba6087</link>
<description/>
<size>186164</size>
</item><item>
<title>Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing</title>
<category>Paper</category>
<infohash>7a7ddb4f7ad47182442b2bdd47215e3631970c9c</infohash>
<guid>https://academictorrents.com/details/7a7ddb4f7ad47182442b2bdd47215e3631970c9c</guid>
<link>https://academictorrents.com/details/7a7ddb4f7ad47182442b2bdd47215e3631970c9c</link>
<description/>
<size>3362551</size>
</item><item>
<title>Practical Approaches to Principal Component Analysis in the Presence of Missing Values</title>
<category>Paper</category>
<infohash>761fc1338d7f2a89e8547f28208e3ebdfc5c11c2</infohash>
<guid>https://academictorrents.com/details/761fc1338d7f2a89e8547f28208e3ebdfc5c11c2</guid>
<link>https://academictorrents.com/details/761fc1338d7f2a89e8547f28208e3ebdfc5c11c2</link>
<description/>
<size>616991</size>
</item><item>
<title>Second-Order Bilinear Discriminant Analysis</title>
<category>Paper</category>
<infohash>ac52deadf849bcedf323fd472d3c0051caa8c7d4</infohash>
<guid>https://academictorrents.com/details/ac52deadf849bcedf323fd472d3c0051caa8c7d4</guid>
<link>https://academictorrents.com/details/ac52deadf849bcedf323fd472d3c0051caa8c7d4</link>
<description/>
<size>1120955</size>
</item><item>
<title>Towards Integrative Causal Analysis of Heterogeneous Data Sets and Studies</title>
<category>Paper</category>
<infohash>9ef8e09e06220181e0521d4e4cec1db8ed713233</infohash>
<guid>https://academictorrents.com/details/9ef8e09e06220181e0521d4e4cec1db8ed713233</guid>
<link>https://academictorrents.com/details/9ef8e09e06220181e0521d4e4cec1db8ed713233</link>
<description/>
<size>4046691</size>
</item><item>
<title>Chromatic PAC-Bayes Bounds for Non-IID Data: Applications to Ranking and Stationary β-Mixing Processes</title>
<category>Paper</category>
<infohash>2d42e44af64e4da873db06498a34264f0ca4c5f4</infohash>
<guid>https://academictorrents.com/details/2d42e44af64e4da873db06498a34264f0ca4c5f4</guid>
<link>https://academictorrents.com/details/2d42e44af64e4da873db06498a34264f0ca4c5f4</link>
<description/>
<size>237710</size>
</item><item>
<title>Graphical Methods for Efficient Likelihood Inference in Gaussian Covariance Models</title>
<category>Paper</category>
<infohash>d7a91c029fffd0bcd5aca304fcdfb635d51449df</infohash>
<guid>https://academictorrents.com/details/d7a91c029fffd0bcd5aca304fcdfb635d51449df</guid>
<link>https://academictorrents.com/details/d7a91c029fffd0bcd5aca304fcdfb635d51449df</link>
<description/>
<size>246473</size>
</item><item>
<title>MULTIBOOST: A Multi-purpose Boosting Package</title>
<category>Paper</category>
<infohash>43ba4aab8e4826a7a678aa7656546074a8fe84e0</infohash>
<guid>https://academictorrents.com/details/43ba4aab8e4826a7a678aa7656546074a8fe84e0</guid>
<link>https://academictorrents.com/details/43ba4aab8e4826a7a678aa7656546074a8fe84e0</link>
<description/>
<size>40061</size>
</item><item>
<title>Introduction to the Special Issue on Learning Theory</title>
<category>Paper</category>
<infohash>26f224552202d280661bf7ddd130bdb9af86c005</infohash>
<guid>https://academictorrents.com/details/26f224552202d280661bf7ddd130bdb9af86c005</guid>
<link>https://academictorrents.com/details/26f224552202d280661bf7ddd130bdb9af86c005</link>
<description/>
<size>411958</size>
</item><item>
<title>An Empirical Study of the Use of Relevance Information in Inductive Logic Programming</title>
<category>Paper</category>
<infohash>9ca794dc4c0c8a6bfce301172ab3aa72b499bd1c</infohash>
<guid>https://academictorrents.com/details/9ca794dc4c0c8a6bfce301172ab3aa72b499bd1c</guid>
<link>https://academictorrents.com/details/9ca794dc4c0c8a6bfce301172ab3aa72b499bd1c</link>
<description/>
<size>97690</size>
</item><item>
<title>Message-passing for Graph-structured Linear Programs: Proximal Methods and Rounding Schemes</title>
<category>Paper</category>
<infohash>781510d86759f5991a9a465120e26f4ef5088f6d</infohash>
<guid>https://academictorrents.com/details/781510d86759f5991a9a465120e26f4ef5088f6d</guid>
<link>https://academictorrents.com/details/781510d86759f5991a9a465120e26f4ef5088f6d</link>
<description/>
<size>300917</size>
</item><item>
<title>Bounding the Probability of Error for High Precision Optical Character Recognition</title>
<category>Paper</category>
<infohash>412368484d2ada74fa61a9671199847e1b8156f3</infohash>
<guid>https://academictorrents.com/details/412368484d2ada74fa61a9671199847e1b8156f3</guid>
<link>https://academictorrents.com/details/412368484d2ada74fa61a9671199847e1b8156f3</link>
<description/>
<size>698793</size>
</item><item>
<title>Online Learning for Matrix Factorization and Sparse Coding</title>
<category>Paper</category>
<infohash>54687040e505d30dc4b9c3243d634089c21ca8f2</infohash>
<guid>https://academictorrents.com/details/54687040e505d30dc4b9c3243d634089c21ca8f2</guid>
<link>https://academictorrents.com/details/54687040e505d30dc4b9c3243d634089c21ca8f2</link>
<description/>
<size>4167258</size>
</item><item>
<title>Learning Linear Cyclic Causal Models with Latent Variables</title>
<category>Paper</category>
<infohash>dd421e0587936b94743880c4ffd81401dbb6c372</infohash>
<guid>https://academictorrents.com/details/dd421e0587936b94743880c4ffd81401dbb6c372</guid>
<link>https://academictorrents.com/details/dd421e0587936b94743880c4ffd81401dbb6c372</link>
<description/>
<size>516046</size>
</item><item>
<title>Iterative Scaling and Coordinate Descent Methods for Maximum Entropy Models</title>
<category>Paper</category>
<infohash>f6ebbdc8e0fb50dae3718ca954f52ccb7adb4047</infohash>
<guid>https://academictorrents.com/details/f6ebbdc8e0fb50dae3718ca954f52ccb7adb4047</guid>
<link>https://academictorrents.com/details/f6ebbdc8e0fb50dae3718ca954f52ccb7adb4047</link>
<description/>
<size>319906</size>
</item><item>
<title>Classification with a Reject Option using a Hinge Loss</title>
<category>Paper</category>
<infohash>0db1513fac174b28bdb073ea542a430e799bc9ed</infohash>
<guid>https://academictorrents.com/details/0db1513fac174b28bdb073ea542a430e799bc9ed</guid>
<link>https://academictorrents.com/details/0db1513fac174b28bdb073ea542a430e799bc9ed</link>
<description/>
<size>299308</size>
</item><item>
<title>Variational Learning of Clusters of Undercomplete Nonsymmetric Independent Components</title>
<category>Paper</category>
<infohash>3a3601d50ea073a2b63cd09c243c4854db5f369e</infohash>
<guid>https://academictorrents.com/details/3a3601d50ea073a2b63cd09c243c4854db5f369e</guid>
<link>https://academictorrents.com/details/3a3601d50ea073a2b63cd09c243c4854db5f369e</link>
<description/>
<size>337599</size>
</item><item>
<title>ICA Using Spacings Estimates of Entropy</title>
<category>Paper</category>
<infohash>2f3b72e0452c5e062c9ec69c6a9a4c89c6e454a3</infohash>
<guid>https://academictorrents.com/details/2f3b72e0452c5e062c9ec69c6a9a4c89c6e454a3</guid>
<link>https://academictorrents.com/details/2f3b72e0452c5e062c9ec69c6a9a4c89c6e454a3</link>
<description/>
<size>534265</size>
</item><item>
<title>A Fast Hybrid Algorithm for Large-Scale l1-Regularized Logistic Regression</title>
<category>Paper</category>
<infohash>992fc5a3b795a31d6b455b1d62bc010e7a643304</infohash>
<guid>https://academictorrents.com/details/992fc5a3b795a31d6b455b1d62bc010e7a643304</guid>
<link>https://academictorrents.com/details/992fc5a3b795a31d6b455b1d62bc010e7a643304</link>
<description/>
<size>2705447</size>
</item><item>
<title>Dimensionality Reduction for Supervised Learning with Reproducing Kernel Hilbert Spaces</title>
<category>Paper</category>
<infohash>74cb0646ae7131abd9a5a74185769d15f0c12959</infohash>
<guid>https://academictorrents.com/details/74cb0646ae7131abd9a5a74185769d15f0c12959</guid>
<link>https://academictorrents.com/details/74cb0646ae7131abd9a5a74185769d15f0c12959</link>
<description/>
<size>340159</size>
</item><item>
<title>Matching Words and Pictures</title>
<category>Paper</category>
<infohash>d30c702637eeb68a57def309ace329842069bf93</infohash>
<guid>https://academictorrents.com/details/d30c702637eeb68a57def309ace329842069bf93</guid>
<link>https://academictorrents.com/details/d30c702637eeb68a57def309ace329842069bf93</link>
<description/>
<size>241452</size>
</item><item>
<title>Robust Kernel Density Estimation</title>
<category>Paper</category>
<infohash>8f9e1d5c029a01bdc341af554109ac93a6cb2363</infohash>
<guid>https://academictorrents.com/details/8f9e1d5c029a01bdc341af554109ac93a6cb2363</guid>
<link>https://academictorrents.com/details/8f9e1d5c029a01bdc341af554109ac93a6cb2363</link>
<description/>
<size>457798</size>
</item><item>
<title>Efficient Algorithms for Conditional Independence Inference</title>
<category>Paper</category>
<infohash>7db911570613fe0e2aca854b562db381906a270a</infohash>
<guid>https://academictorrents.com/details/7db911570613fe0e2aca854b562db381906a270a</guid>
<link>https://academictorrents.com/details/7db911570613fe0e2aca854b562db381906a270a</link>
<description/>
<size>218063</size>
</item><item>
<title>Generalized Power Method for Sparse Principal Component Analysis</title>
<category>Paper</category>
<infohash>9f3b021d777a45db7a13f44e8bea155b8cb58085</infohash>
<guid>https://academictorrents.com/details/9f3b021d777a45db7a13f44e8bea155b8cb58085</guid>
<link>https://academictorrents.com/details/9f3b021d777a45db7a13f44e8bea155b8cb58085</link>
<description/>
<size>303390</size>
</item><item>
<title>Classification with Incomplete Data Using Dirichlet Process Priors</title>
<category>Paper</category>
<infohash>afb17be0c8af025e713c385fa5ed3d66a4b06c9b</infohash>
<guid>https://academictorrents.com/details/afb17be0c8af025e713c385fa5ed3d66a4b06c9b</guid>
<link>https://academictorrents.com/details/afb17be0c8af025e713c385fa5ed3d66a4b06c9b</link>
<description/>
<size>1177307</size>
</item><item>
<title>Manifold Learning: The Price of Normalization</title>
<category>Paper</category>
<infohash>b32416de370acf8ec5131d8f4099675600e114c4</infohash>
<guid>https://academictorrents.com/details/b32416de370acf8ec5131d8f4099675600e114c4</guid>
<link>https://academictorrents.com/details/b32416de370acf8ec5131d8f4099675600e114c4</link>
<description/>
<size>320122</size>
</item><item>
<title>Local Causal and Markov Blanket Induction for Causal Discovery and Feature Selection for Classification Part I: Algorithms and Empirical Evaluation</title>
<category>Paper</category>
<infohash>23d65f7e7c185f595fb4c850f448f6cea7668c28</infohash>
<guid>https://academictorrents.com/details/23d65f7e7c185f595fb4c850f448f6cea7668c28</guid>
<link>https://academictorrents.com/details/23d65f7e7c185f595fb4c850f448f6cea7668c28</link>
<description/>
<size>2425620</size>
</item><item>
<title>Spectral Regularization Algorithms for Learning Large Incomplete Matrices</title>
<category>Paper</category>
<infohash>4fcb51e85df93faa5dd2c8496117c427eaafae8a</infohash>
<guid>https://academictorrents.com/details/4fcb51e85df93faa5dd2c8496117c427eaafae8a</guid>
<link>https://academictorrents.com/details/4fcb51e85df93faa5dd2c8496117c427eaafae8a</link>
<description/>
<size>569199</size>
</item><item>
<title>Weather Data Mining Using Independent Component Analysis</title>
<category>Paper</category>
<infohash>4ff6ef8523a58c14beafa5e99f512e96bc2f6b10</infohash>
<guid>https://academictorrents.com/details/4ff6ef8523a58c14beafa5e99f512e96bc2f6b10</guid>
<link>https://academictorrents.com/details/4ff6ef8523a58c14beafa5e99f512e96bc2f6b10</link>
<description/>
<size>168930</size>
</item><item>
<title>Large Scale Online Learning of Image Similarity Through Ranking</title>
<category>Paper</category>
<infohash>c95e5c45a8cfa5b3b07d7bcce803e7fc5d4c5284</infohash>
<guid>https://academictorrents.com/details/c95e5c45a8cfa5b3b07d7bcce803e7fc5d4c5284</guid>
<link>https://academictorrents.com/details/c95e5c45a8cfa5b3b07d7bcce803e7fc5d4c5284</link>
<description/>
<size>1167567</size>
</item><item>
<title>Incremental Sigmoid Belief Networks for Grammar Learning</title>
<category>Paper</category>
<infohash>0b615aa4e85be044021050e8abf2834a1da2fe28</infohash>
<guid>https://academictorrents.com/details/0b615aa4e85be044021050e8abf2834a1da2fe28</guid>
<link>https://academictorrents.com/details/0b615aa4e85be044021050e8abf2834a1da2fe28</link>
<description/>
<size>252380</size>
</item><item>
<title>Active Learning of Causal Networks with Intervention Experiments and Optimal Designs (Special Topic on Causality)</title>
<category>Paper</category>
<infohash>518e14455aa670fa96c89c2fa86421a5e19011a6</infohash>
<guid>https://academictorrents.com/details/518e14455aa670fa96c89c2fa86421a5e19011a6</guid>
<link>https://academictorrents.com/details/518e14455aa670fa96c89c2fa86421a5e19011a6</link>
<description/>
<size>233007</size>
</item><item>
<title>Matrix Completion from Noisy Entries</title>
<category>Paper</category>
<infohash>0756c4a008924839bdbb8c0d73113dfa6c636f9d</infohash>
<guid>https://academictorrents.com/details/0756c4a008924839bdbb8c0d73113dfa6c636f9d</guid>
<link>https://academictorrents.com/details/0756c4a008924839bdbb8c0d73113dfa6c636f9d</link>
<description/>
<size>199962</size>
</item><item>
<title>A Surrogate Modeling and Adaptive Sampling Toolbox for Computer Based Design</title>
<category>Paper</category>
<infohash>a0c9ac85b5d5a386250be4451913a03020e78931</infohash>
<guid>https://academictorrents.com/details/a0c9ac85b5d5a386250be4451913a03020e78931</guid>
<link>https://academictorrents.com/details/a0c9ac85b5d5a386250be4451913a03020e78931</link>
<description/>
<size>323663</size>
</item><item>
<title>Distance-Based Classification with Lipschitz Functions (Special Topic on Learning Theory)</title>
<category>Paper</category>
<infohash>e66e7c65690698c34045fb6a89085f020a9d5fcf</infohash>
<guid>https://academictorrents.com/details/e66e7c65690698c34045fb6a89085f020a9d5fcf</guid>
<link>https://academictorrents.com/details/e66e7c65690698c34045fb6a89085f020a9d5fcf</link>
<description/>
<size>153013</size>
</item><item>
<title>Combining Knowledge from Different Sources in Causal Probabilistic Models</title>
<category>Paper</category>
<infohash>3b8ce43d2b727fb96fde0ca485c41589d67bc11e</infohash>
<guid>https://academictorrents.com/details/3b8ce43d2b727fb96fde0ca485c41589d67bc11e</guid>
<link>https://academictorrents.com/details/3b8ce43d2b727fb96fde0ca485c41589d67bc11e</link>
<description/>
<size>152729</size>
</item><item>
<title>Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds</title>
<category>Paper</category>
<infohash>1b5ba31cd3df0adfc9aaadfac2d4ef13fe6ba423</infohash>
<guid>https://academictorrents.com/details/1b5ba31cd3df0adfc9aaadfac2d4ef13fe6ba423</guid>
<link>https://academictorrents.com/details/1b5ba31cd3df0adfc9aaadfac2d4ef13fe6ba423</link>
<description/>
<size>5479883</size>
</item><item>
<title>Quadratic Programming Feature Selection</title>
<category>Paper</category>
<infohash>109e4b0af9eda299d253009b35086107a18f7c3d</infohash>
<guid>https://academictorrents.com/details/109e4b0af9eda299d253009b35086107a18f7c3d</guid>
<link>https://academictorrents.com/details/109e4b0af9eda299d253009b35086107a18f7c3d</link>
<description/>
<size>358442</size>
</item><item>
<title>Efficient Feature Selection via Analysis of Relevance and Redundancy</title>
<category>Paper</category>
<infohash>143139de95c2900b81e43253332964e43579eccb</infohash>
<guid>https://academictorrents.com/details/143139de95c2900b81e43253332964e43579eccb</guid>
<link>https://academictorrents.com/details/143139de95c2900b81e43253332964e43579eccb</link>
<description/>
<size>221157</size>
</item><item>
<title>On the Proper Learning of Axis-Parallel Concepts</title>
<category>Paper</category>
<infohash>9e1cbfb7707e416e8678dbddcd2b8621144e3d8d</infohash>
<guid>https://academictorrents.com/details/9e1cbfb7707e416e8678dbddcd2b8621144e3d8d</guid>
<link>https://academictorrents.com/details/9e1cbfb7707e416e8678dbddcd2b8621144e3d8d</link>
<description/>
<size>190229</size>
</item><item>
<title>Distance Metric Learning with Eigenvalue Optimization</title>
<category>Paper</category>
<infohash>82d7811db75a699d123540f97f5a10ecf82568b0</infohash>
<guid>https://academictorrents.com/details/82d7811db75a699d123540f97f5a10ecf82568b0</guid>
<link>https://academictorrents.com/details/82d7811db75a699d123540f97f5a10ecf82568b0</link>
<description/>
<size>238075</size>
</item><item>
<title>Kernel Independent Component Analysis (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>9aff3a0db5031f9e10b8262bea3ae30e11d2af3d</infohash>
<guid>https://academictorrents.com/details/9aff3a0db5031f9e10b8262bea3ae30e11d2af3d</guid>
<link>https://academictorrents.com/details/9aff3a0db5031f9e10b8262bea3ae30e11d2af3d</link>
<description/>
<size>481878</size>
</item><item>
<title>Consistency of Trace Norm Minimization</title>
<category>Paper</category>
<infohash>23483f229c9ced5b0e0fe4cf631d06ee881a67d9</infohash>
<guid>https://academictorrents.com/details/23483f229c9ced5b0e0fe4cf631d06ee881a67d9</guid>
<link>https://academictorrents.com/details/23483f229c9ced5b0e0fe4cf631d06ee881a67d9</link>
<description/>
<size>168849</size>
</item><item>
<title>Mal-ID: Automatic Malware Detection Using Common Segment Analysis and Meta-Features</title>
<category>Paper</category>
<infohash>6c1daa1fab43805daeddd2c3cb77ab6e2499be05</infohash>
<guid>https://academictorrents.com/details/6c1daa1fab43805daeddd2c3cb77ab6e2499be05</guid>
<link>https://academictorrents.com/details/6c1daa1fab43805daeddd2c3cb77ab6e2499be05</link>
<description/>
<size>486136</size>
</item><item>
<title>A Multi-Stage Framework for Dantzig Selector and LASSO</title>
<category>Paper</category>
<infohash>889cda18e25e3f41a8b9252e5449eeda18548b66</infohash>
<guid>https://academictorrents.com/details/889cda18e25e3f41a8b9252e5449eeda18548b66</guid>
<link>https://academictorrents.com/details/889cda18e25e3f41a8b9252e5449eeda18548b66</link>
<description/>
<size>246478</size>
</item><item>
<title>Stochastic Composite Likelihood</title>
<category>Paper</category>
<infohash>78716f1d436c9cbc4137a8e4552c596bec1256d7</infohash>
<guid>https://academictorrents.com/details/78716f1d436c9cbc4137a8e4552c596bec1256d7</guid>
<link>https://academictorrents.com/details/78716f1d436c9cbc4137a8e4552c596bec1256d7</link>
<description/>
<size>749141</size>
</item><item>
<title>The Minimum Error Minimax Probability Machine</title>
<category>Paper</category>
<infohash>6ef48588ccadd8a66f4fb57556644b46988251ce</infohash>
<guid>https://academictorrents.com/details/6ef48588ccadd8a66f4fb57556644b46988251ce</guid>
<link>https://academictorrents.com/details/6ef48588ccadd8a66f4fb57556644b46988251ce</link>
<description/>
<size>196392</size>
</item><item>
<title>An Efficient Boosting Algorithm for Combining Preferences</title>
<category>Paper</category>
<infohash>776004e8c9e95787577b92fc7d95bd36619ad351</infohash>
<guid>https://academictorrents.com/details/776004e8c9e95787577b92fc7d95bd36619ad351</guid>
<link>https://academictorrents.com/details/776004e8c9e95787577b92fc7d95bd36619ad351</link>
<description/>
<size>208763</size>
</item><item>
<title>Learning Gradients: Predictive Models that Infer Geometry and Statistical Dependence</title>
<category>Paper</category>
<infohash>5f6cc184ae73ffa7eb7e30ebdf5e9f3698c4838a</infohash>
<guid>https://academictorrents.com/details/5f6cc184ae73ffa7eb7e30ebdf5e9f3698c4838a</guid>
<link>https://academictorrents.com/details/5f6cc184ae73ffa7eb7e30ebdf5e9f3698c4838a</link>
<description/>
<size>1715001</size>
</item><item>
<title>Sources of Success for Boosted Wrapper Induction</title>
<category>Paper</category>
<infohash>2e675fd95fd66b8e1ec4484f32b4ac8ea0233257</infohash>
<guid>https://academictorrents.com/details/2e675fd95fd66b8e1ec4484f32b4ac8ea0233257</guid>
<link>https://academictorrents.com/details/2e675fd95fd66b8e1ec4484f32b4ac8ea0233257</link>
<description/>
<size>166046</size>
</item><item>
<title>PAC-learnability of Probabilistic Deterministic Finite State Automata</title>
<category>Paper</category>
<infohash>df1998a71077eb7a760183a3113069ca656304b8</infohash>
<guid>https://academictorrents.com/details/df1998a71077eb7a760183a3113069ca656304b8</guid>
<link>https://academictorrents.com/details/df1998a71077eb7a760183a3113069ca656304b8</link>
<description/>
<size>238050</size>
</item><item>
<title>Multi-task Regression using Minimal Penalties</title>
<category>Paper</category>
<infohash>26f1d4b3dd6d99ab4b66c15207ba1c667d83abe1</infohash>
<guid>https://academictorrents.com/details/26f1d4b3dd6d99ab4b66c15207ba1c667d83abe1</guid>
<link>https://academictorrents.com/details/26f1d4b3dd6d99ab4b66c15207ba1c667d83abe1</link>
<description/>
<size>293140</size>
</item><item>
<title>Feature Discovery in Non-Metric Pairwise Data</title>
<category>Paper</category>
<infohash>fad3010c2d13bf634f90b8c452e96742d6d6bcc7</infohash>
<guid>https://academictorrents.com/details/fad3010c2d13bf634f90b8c452e96742d6d6bcc7</guid>
<link>https://academictorrents.com/details/fad3010c2d13bf634f90b8c452e96742d6d6bcc7</link>
<description/>
<size>255361</size>
</item><item>
<title>An Approximate Analytical Approach to Resampling Averages (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>0460c8c1896c195a47791759abe21686cbb59962</infohash>
<guid>https://academictorrents.com/details/0460c8c1896c195a47791759abe21686cbb59962</guid>
<link>https://academictorrents.com/details/0460c8c1896c195a47791759abe21686cbb59962</link>
<description/>
<size>3158908</size>
</item><item>
<title>On the Convergence Rate of lp-Norm Multiple Kernel Learning</title>
<category>Paper</category>
<infohash>5cf922c6e8277a4e554a5370c8066469c238cd48</infohash>
<guid>https://academictorrents.com/details/5cf922c6e8277a4e554a5370c8066469c238cd48</guid>
<link>https://academictorrents.com/details/5cf922c6e8277a4e554a5370c8066469c238cd48</link>
<description/>
<size>417134</size>
</item><item>
<title>SVMTorch: Support Vector Machines for Large-Scale Regression Problems (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>e56fbff6604eb9544fd702e368479af155406413</infohash>
<guid>https://academictorrents.com/details/e56fbff6604eb9544fd702e368479af155406413</guid>
<link>https://academictorrents.com/details/e56fbff6604eb9544fd702e368479af155406413</link>
<description/>
<size>307974</size>
</item><item>
<title>A Unified Framework for Model-based Clustering</title>
<category>Paper</category>
<infohash>818f8f9a9d516aeb11ca86f8df6d0a5a8febfd45</infohash>
<guid>https://academictorrents.com/details/818f8f9a9d516aeb11ca86f8df6d0a5a8febfd45</guid>
<link>https://academictorrents.com/details/818f8f9a9d516aeb11ca86f8df6d0a5a8febfd45</link>
<description/>
<size>338530</size>
</item><item>
<title>Robust Principal Component Analysis with Adaptive Selection for Tuning Parameters</title>
<category>Paper</category>
<infohash>aaf5ad6fca5f36419ba219aa90bd87c800b18054</infohash>
<guid>https://academictorrents.com/details/aaf5ad6fca5f36419ba219aa90bd87c800b18054</guid>
<link>https://academictorrents.com/details/aaf5ad6fca5f36419ba219aa90bd87c800b18054</link>
<description/>
<size>3296670</size>
</item><item>
<title>Inducing Grammars from Sparse Data Sets: A Survey of Algorithms and Results</title>
<category>Paper</category>
<infohash>c867c760e4256aafb009cc7e5b9d893430c4f6e4</infohash>
<guid>https://academictorrents.com/details/c867c760e4256aafb009cc7e5b9d893430c4f6e4</guid>
<link>https://academictorrents.com/details/c867c760e4256aafb009cc7e5b9d893430c4f6e4</link>
<description/>
<size>249819</size>
</item><item>
<title>Benefitting from the Variables that Variable Selection Discards</title>
<category>Paper</category>
<infohash>b388afb9c9ce20102130a3ecdddf4e685e49797d</infohash>
<guid>https://academictorrents.com/details/b388afb9c9ce20102130a3ecdddf4e685e49797d</guid>
<link>https://academictorrents.com/details/b388afb9c9ce20102130a3ecdddf4e685e49797d</link>
<description/>
<size>176224</size>
</item><item>
<title>Characterization and Greedy Learning of Interventional Markov Equivalence Classes of Directed Acyclic Graphs</title>
<category>Paper</category>
<infohash>294776d4fe640bb3e5ad3dfcd2328f2346c3d0e0</infohash>
<guid>https://academictorrents.com/details/294776d4fe640bb3e5ad3dfcd2328f2346c3d0e0</guid>
<link>https://academictorrents.com/details/294776d4fe640bb3e5ad3dfcd2328f2346c3d0e0</link>
<description/>
<size>1835220</size>
</item><item>
<title>On Inclusion-Driven Learning of Bayesian Networks</title>
<category>Paper</category>
<infohash>fe662fc45e7629b07622f64444a77b710a5729bb</infohash>
<guid>https://academictorrents.com/details/fe662fc45e7629b07622f64444a77b710a5729bb</guid>
<link>https://academictorrents.com/details/fe662fc45e7629b07622f64444a77b710a5729bb</link>
<description/>
<size>245651</size>
</item><item>
<title>Jstacs: A Java Framework for Statistical Analysis and Classification of Biological Sequences</title>
<category>Paper</category>
<infohash>9ff938b61ed7b5e3d5ebfe3278ce56bd5cc5a3f2</infohash>
<guid>https://academictorrents.com/details/9ff938b61ed7b5e3d5ebfe3278ce56bd5cc5a3f2</guid>
<link>https://academictorrents.com/details/9ff938b61ed7b5e3d5ebfe3278ce56bd5cc5a3f2</link>
<description/>
<size>184052</size>
</item><item>
<title>Feature Selection for Unsupervised Learning</title>
<category>Paper</category>
<infohash>9fe1df0d4a7ded57d447c97318b35e60810e32a2</infohash>
<guid>https://academictorrents.com/details/9fe1df0d4a7ded57d447c97318b35e60810e32a2</guid>
<link>https://academictorrents.com/details/9fe1df0d4a7ded57d447c97318b35e60810e32a2</link>
<description/>
<size>317561</size>
</item><item>
<title>Overlearning in Marginal Distribution-Based ICA: Analysis and Solutions</title>
<category>Paper</category>
<infohash>deb20678ef8a6bbb391b199cafc48998a39ffc1d</infohash>
<guid>https://academictorrents.com/details/deb20678ef8a6bbb391b199cafc48998a39ffc1d</guid>
<link>https://academictorrents.com/details/deb20678ef8a6bbb391b199cafc48998a39ffc1d</link>
<description/>
<size>2093826</size>
</item><item>
<title>Learning the Kernel Matrix with Semidefinite Programming</title>
<category>Paper</category>
<infohash>2d9247a0b27d6f1130bdcd8b869830ecd8c6243e</infohash>
<guid>https://academictorrents.com/details/2d9247a0b27d6f1130bdcd8b869830ecd8c6243e</guid>
<link>https://academictorrents.com/details/2d9247a0b27d6f1130bdcd8b869830ecd8c6243e</link>
<description/>
<size>386567</size>
</item><item>
<title>In Defense of One-Vs-All Classification</title>
<category>Paper</category>
<infohash>39551e7bd5f0af1f567a2fef9413938c566f0d3b</infohash>
<guid>https://academictorrents.com/details/39551e7bd5f0af1f567a2fef9413938c566f0d3b</guid>
<link>https://academictorrents.com/details/39551e7bd5f0af1f567a2fef9413938c566f0d3b</link>
<description/>
<size>73107</size>
</item><item>
<title>Learning Ensembles from Bites: A Scalable and Accurate Approach</title>
<category>Paper</category>
<infohash>76fb996864ee7fd80e65c8f6fe44478bb12bbd71</infohash>
<guid>https://academictorrents.com/details/76fb996864ee7fd80e65c8f6fe44478bb12bbd71</guid>
<link>https://academictorrents.com/details/76fb996864ee7fd80e65c8f6fe44478bb12bbd71</link>
<description/>
<size>452372</size>
</item><item>
<title>MLPs (Mono-Layer Polynomials and Multi-Layer Perceptrons) for Nonlinear Modeling</title>
<category>Paper</category>
<infohash>2e520073db4ce011c5a05d5a8b08496d47865f9f</infohash>
<guid>https://academictorrents.com/details/2e520073db4ce011c5a05d5a8b08496d47865f9f</guid>
<link>https://academictorrents.com/details/2e520073db4ce011c5a05d5a8b08496d47865f9f</link>
<description/>
<size>183688</size>
</item><item>
<title>Fast String Kernels using Inexact Matching for Protein Sequences</title>
<category>Paper</category>
<infohash>cd5521215c0dc806e8eba38813d34d8263c5ee3c</infohash>
<guid>https://academictorrents.com/details/cd5521215c0dc806e8eba38813d34d8263c5ee3c</guid>
<link>https://academictorrents.com/details/cd5521215c0dc806e8eba38813d34d8263c5ee3c</link>
<description/>
<size>142587</size>
</item><item>
<title>Conditional Likelihood Maximisation: A Unifying Framework for Information Theoretic Feature Selection</title>
<category>Paper</category>
<infohash>2c623a098b9f668b9501b3606ab5f94034d81396</infohash>
<guid>https://academictorrents.com/details/2c623a098b9f668b9501b3606ab5f94034d81396</guid>
<link>https://academictorrents.com/details/2c623a098b9f668b9501b3606ab5f94034d81396</link>
<description/>
<size>706101</size>
</item><item>
<title>Tracking the Best Linear Predictor</title>
<category>Paper</category>
<infohash>c09e7c3fba6fcc802002d6f9a15fe54d8dee5a94</infohash>
<guid>https://academictorrents.com/details/c09e7c3fba6fcc802002d6f9a15fe54d8dee5a94</guid>
<link>https://academictorrents.com/details/c09e7c3fba6fcc802002d6f9a15fe54d8dee5a94</link>
<description/>
<size>476946</size>
</item><item>
<title>Linear Regression With Random Projections</title>
<category>Paper</category>
<infohash>eb003a400246f6e364bf3ac0bb44067055988bd4</infohash>
<guid>https://academictorrents.com/details/eb003a400246f6e364bf3ac0bb44067055988bd4</guid>
<link>https://academictorrents.com/details/eb003a400246f6e364bf3ac0bb44067055988bd4</link>
<description/>
<size>586305</size>
</item><item>
<title>Bottom-Up Relational Learning of Pattern Matching Rules for Information Extraction</title>
<category>Paper</category>
<infohash>923d25dc1171104ed2a862fbdfc218ff56f5aa68</infohash>
<guid>https://academictorrents.com/details/923d25dc1171104ed2a862fbdfc218ff56f5aa68</guid>
<link>https://academictorrents.com/details/923d25dc1171104ed2a862fbdfc218ff56f5aa68</link>
<description/>
<size>184334</size>
</item><item>
<title>The Dynamics of AdaBoost: Cyclic Behavior and Convergence of Margins</title>
<category>Paper</category>
<infohash>af7c1049149f10db2ade5a6b9d9b6fecdcecb0a1</infohash>
<guid>https://academictorrents.com/details/af7c1049149f10db2ade5a6b9d9b6fecdcecb0a1</guid>
<link>https://academictorrents.com/details/af7c1049149f10db2ade5a6b9d9b6fecdcecb0a1</link>
<description/>
<size>168449</size>
</item><item>
<title>The Sample Complexity of Exploration in the Multi-Armed Bandit Problem (Special Topic on Learning Theory)</title>
<category>Paper</category>
<infohash>1e3a97735443c7592be48b78733f2349ee0b403f</infohash>
<guid>https://academictorrents.com/details/1e3a97735443c7592be48b78733f2349ee0b403f</guid>
<link>https://academictorrents.com/details/1e3a97735443c7592be48b78733f2349ee0b403f</link>
<description/>
<size>174521</size>
</item><item>
<title>Randomized Variable Elimination</title>
<category>Paper</category>
<infohash>8177336f115e53e3de8a416b97194bfe26e84e6d</infohash>
<guid>https://academictorrents.com/details/8177336f115e53e3de8a416b97194bfe26e84e6d</guid>
<link>https://academictorrents.com/details/8177336f115e53e3de8a416b97194bfe26e84e6d</link>
<description/>
<size>386372</size>
</item><item>
<title>Multi-Instance Learning with Any Hypothesis Class</title>
<category>Paper</category>
<infohash>cd9c87f148a42e541d41ac5356c359309a8049f9</infohash>
<guid>https://academictorrents.com/details/cd9c87f148a42e541d41ac5356c359309a8049f9</guid>
<link>https://academictorrents.com/details/cd9c87f148a42e541d41ac5356c359309a8049f9</link>
<description/>
<size>328574</size>
</item><item>
<title>A New Algorithm for Estimating the Effective Dimension-Reduction Subspace</title>
<category>Paper</category>
<infohash>b8da80a6506bb92ee0361848b4106e2de6aec52d</infohash>
<guid>https://academictorrents.com/details/b8da80a6506bb92ee0361848b4106e2de6aec52d</guid>
<link>https://academictorrents.com/details/b8da80a6506bb92ee0361848b4106e2de6aec52d</link>
<description/>
<size>292054</size>
</item><item>
<title>High-Dimensional Gaussian Graphical Model Selection: Walk Summability and Local Separation Criterion</title>
<category>Paper</category>
<infohash>85d39ec1436373da7c96212f48c59c7f50acc651</infohash>
<guid>https://academictorrents.com/details/85d39ec1436373da7c96212f48c59c7f50acc651</guid>
<link>https://academictorrents.com/details/85d39ec1436373da7c96212f48c59c7f50acc651</link>
<description/>
<size>380313</size>
</item><item>
<title>The Subspace Information Criterion for Infinite Dimensional Hypothesis Spaces (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>309dbeb9e5cd88d8bc7ad02d109dd5674549adea</infohash>
<guid>https://academictorrents.com/details/309dbeb9e5cd88d8bc7ad02d109dd5674549adea</guid>
<link>https://academictorrents.com/details/309dbeb9e5cd88d8bc7ad02d109dd5674549adea</link>
<description/>
<size>675842</size>
</item><item>
<title>Coupled Clustering: A Method for Detecting Structural Correspondence</title>
<category>Paper</category>
<infohash>31bb1d49aa29c536ff8f4edfbb1ae35fdf766135</infohash>
<guid>https://academictorrents.com/details/31bb1d49aa29c536ff8f4edfbb1ae35fdf766135</guid>
<link>https://academictorrents.com/details/31bb1d49aa29c536ff8f4edfbb1ae35fdf766135</link>
<description/>
<size>111017</size>
</item><item>
<title>Static Prediction Games for Adversarial Learning Problems</title>
<category>Paper</category>
<infohash>cdd39295089b63185ca2fb6009858297a3125d0b</infohash>
<guid>https://academictorrents.com/details/cdd39295089b63185ca2fb6009858297a3125d0b</guid>
<link>https://academictorrents.com/details/cdd39295089b63185ca2fb6009858297a3125d0b</link>
<description/>
<size>619681</size>
</item><item>
<title>Multi-Target Regression with Rule Ensembles</title>
<category>Paper</category>
<infohash>e363d82858aa3aea8a32cad442a29ac8d243b333</infohash>
<guid>https://academictorrents.com/details/e363d82858aa3aea8a32cad442a29ac8d243b333</guid>
<link>https://academictorrents.com/details/e363d82858aa3aea8a32cad442a29ac8d243b333</link>
<description/>
<size>328302</size>
</item><item>
<title>Learning Reliable Classifiers From Small or Incomplete Data Sets: The Naive Credal Classifier 2</title>
<category>Paper</category>
<infohash>f62507cd6f2f5fd5a248b3f43706d8909be1b27d</infohash>
<guid>https://academictorrents.com/details/f62507cd6f2f5fd5a248b3f43706d8909be1b27d</guid>
<link>https://academictorrents.com/details/f62507cd6f2f5fd5a248b3f43706d8909be1b27d</link>
<description/>
<size>267556</size>
</item><item>
<title>Large-Sample Learning of Bayesian Networks is NP-Hard</title>
<category>Paper</category>
<infohash>9a4b9f335fc34e1926309b3c6d81900cefe850ce</infohash>
<guid>https://academictorrents.com/details/9a4b9f335fc34e1926309b3c6d81900cefe850ce</guid>
<link>https://academictorrents.com/details/9a4b9f335fc34e1926309b3c6d81900cefe850ce</link>
<description/>
<size>342067</size>
</item><item>
<title>An Introduction to Variable and Feature Selection (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>29d693fc13be423a38598a7d000c933185f29c90</infohash>
<guid>https://academictorrents.com/details/29d693fc13be423a38598a7d000c933185f29c90</guid>
<link>https://academictorrents.com/details/29d693fc13be423a38598a7d000c933185f29c90</link>
<description/>
<size>763446</size>
</item><item>
<title>Structured Sparsity via Alternating Direction Methods</title>
<category>Paper</category>
<infohash>076ca91cc7f0bd7bb802b6622a5d46ec49b280ea</infohash>
<guid>https://academictorrents.com/details/076ca91cc7f0bd7bb802b6622a5d46ec49b280ea</guid>
<link>https://academictorrents.com/details/076ca91cc7f0bd7bb802b6622a5d46ec49b280ea</link>
<description/>
<size>629885</size>
</item><item>
<title>MedLDA: Maximum Margin Supervised Topic Models</title>
<category>Paper</category>
<infohash>52bb0a07bf9bf6bda93e6178b42bcf3a4e9360a0</infohash>
<guid>https://academictorrents.com/details/52bb0a07bf9bf6bda93e6178b42bcf3a4e9360a0</guid>
<link>https://academictorrents.com/details/52bb0a07bf9bf6bda93e6178b42bcf3a4e9360a0</link>
<description/>
<size>1321456</size>
</item><item>
<title>EP-GIG Priors and Applications in Bayesian Sparse Learning</title>
<category>Paper</category>
<infohash>b590650675dc8b49b4562bd82144f5649c2f6b29</infohash>
<guid>https://academictorrents.com/details/b590650675dc8b49b4562bd82144f5649c2f6b29</guid>
<link>https://academictorrents.com/details/b590650675dc8b49b4562bd82144f5649c2f6b29</link>
<description/>
<size>1228428</size>
</item><item>
<title>Ranking Individuals by Group Comparisons</title>
<category>Paper</category>
<infohash>3540028f12669876e8c36cc9cc7bbd38ed239e22</infohash>
<guid>https://academictorrents.com/details/3540028f12669876e8c36cc9cc7bbd38ed239e22</guid>
<link>https://academictorrents.com/details/3540028f12669876e8c36cc9cc7bbd38ed239e22</link>
<description/>
<size>158945</size>
</item><item>
<title>Finite-Sample Analysis of Least-Squares Policy Iteration</title>
<category>Paper</category>
<infohash>a0980d654eec0d65bfd5d0fb15c04141b46eb5c7</infohash>
<guid>https://academictorrents.com/details/a0980d654eec0d65bfd5d0fb15c04141b46eb5c7</guid>
<link>https://academictorrents.com/details/a0980d654eec0d65bfd5d0fb15c04141b46eb5c7</link>
<description/>
<size>290026</size>
</item><item>
<title>NIMFA: A Python Library for Nonnegative Matrix Factorization</title>
<category>Paper</category>
<infohash>636e004329b9da7452feab9e543c7bca2a3af65f</infohash>
<guid>https://academictorrents.com/details/636e004329b9da7452feab9e543c7bca2a3af65f</guid>
<link>https://academictorrents.com/details/636e004329b9da7452feab9e543c7bca2a3af65f</link>
<description/>
<size>35812</size>
</item><item>
<title>Optimistic Bayesian Sampling in Contextual-Bandit Problems</title>
<category>Paper</category>
<infohash>0b5365e25cf26e1a7592b634ead614db82ed6031</infohash>
<guid>https://academictorrents.com/details/0b5365e25cf26e1a7592b634ead614db82ed6031</guid>
<link>https://academictorrents.com/details/0b5365e25cf26e1a7592b634ead614db82ed6031</link>
<description/>
<size>9990704</size>
</item><item>
<title>Cluster Ensembles - A Knowledge Reuse Framework for Combining Multiple Partitions</title>
<category>Paper</category>
<infohash>f5c1c6514a5d6adb80fe84966ee28e5d8294f83d</infohash>
<guid>https://academictorrents.com/details/f5c1c6514a5d6adb80fe84966ee28e5d8294f83d</guid>
<link>https://academictorrents.com/details/f5c1c6514a5d6adb80fe84966ee28e5d8294f83d</link>
<description/>
<size>29619</size>
</item><item>
<title>Ranking a Random Feature for Variable and Feature Selection</title>
<category>Paper</category>
<infohash>c65d372e2a2e00f1ff3608764a18a66708ed5cab</infohash>
<guid>https://academictorrents.com/details/c65d372e2a2e00f1ff3608764a18a66708ed5cab</guid>
<link>https://academictorrents.com/details/c65d372e2a2e00f1ff3608764a18a66708ed5cab</link>
<description/>
<size>142448</size>
</item><item>
<title>Nonparametric Guidance of Autoencoder Representations using Label Information</title>
<category>Paper</category>
<infohash>53924a521b9c9d67c89a2da3fa8e8f8f1f1c3937</infohash>
<guid>https://academictorrents.com/details/53924a521b9c9d67c89a2da3fa8e8f8f1f1c3937</guid>
<link>https://academictorrents.com/details/53924a521b9c9d67c89a2da3fa8e8f8f1f1c3937</link>
<description/>
<size>816490</size>
</item><item>
<title>A Case Study on Meta-Generalising: A Gaussian Processes Approach</title>
<category>Paper</category>
<infohash>98cf97e5ff54db7b6d12d0b4f98e7002feddad7e</infohash>
<guid>https://academictorrents.com/details/98cf97e5ff54db7b6d12d0b4f98e7002feddad7e</guid>
<link>https://academictorrents.com/details/98cf97e5ff54db7b6d12d0b4f98e7002feddad7e</link>
<description/>
<size>390476</size>
</item><item>
<title>Efficient Methods for Robust Classification Under Uncertainty in Kernel Matrices</title>
<category>Paper</category>
<infohash>d7a243353eec534de466eeb3f2f40b51d9ae2895</infohash>
<guid>https://academictorrents.com/details/d7a243353eec534de466eeb3f2f40b51d9ae2895</guid>
<link>https://academictorrents.com/details/d7a243353eec534de466eeb3f2f40b51d9ae2895</link>
<description/>
<size>395104</size>
</item><item>
<title>Learning Monotone DNF from a Teacher that Almost Does Not Answer Membership Queries</title>
<category>Paper</category>
<infohash>5a5ca85b282b3b1847523789aa7e164d9a7d83c6</infohash>
<guid>https://academictorrents.com/details/5a5ca85b282b3b1847523789aa7e164d9a7d83c6</guid>
<link>https://academictorrents.com/details/5a5ca85b282b3b1847523789aa7e164d9a7d83c6</link>
<description/>
<size>151732</size>
</item><item>
<title>Structured Sparsity and Generalization</title>
<category>Paper</category>
<infohash>ceebd84f2e899b59b7210e16cca72f598dffd293</infohash>
<guid>https://academictorrents.com/details/ceebd84f2e899b59b7210e16cca72f598dffd293</guid>
<link>https://academictorrents.com/details/ceebd84f2e899b59b7210e16cca72f598dffd293</link>
<description/>
<size>146956</size>
</item><item>
<title>Kernel Methods for Relation Extraction</title>
<category>Paper</category>
<infohash>7856d9f782cdb73ed80434bc67777063090a0032</infohash>
<guid>https://academictorrents.com/details/7856d9f782cdb73ed80434bc67777063090a0032</guid>
<link>https://academictorrents.com/details/7856d9f782cdb73ed80434bc67777063090a0032</link>
<description/>
<size>1699537</size>
</item><item>
<title>Positive Semidefinite Metric Learning Using Boosting-like Algorithms</title>
<category>Paper</category>
<infohash>4b9d141736b5842aa7e384fc1e83e7acd5f815b1</infohash>
<guid>https://academictorrents.com/details/4b9d141736b5842aa7e384fc1e83e7acd5f815b1</guid>
<link>https://academictorrents.com/details/4b9d141736b5842aa7e384fc1e83e7acd5f815b1</link>
<description/>
<size>520965</size>
</item><item>
<title>An Improved GLMNET for L1-regularized Logistic Regression</title>
<category>Paper</category>
<infohash>7e2714ef3c9935f7b69f3d340e2c87de325b2cd2</infohash>
<guid>https://academictorrents.com/details/7e2714ef3c9935f7b69f3d340e2c87de325b2cd2</guid>
<link>https://academictorrents.com/details/7e2714ef3c9935f7b69f3d340e2c87de325b2cd2</link>
<description/>
<size>549240</size>
</item><item>
<title>Rejoinder to Responses to Evidence Contrary to the Statistical View of Boosting</title>
<category>Paper</category>
<infohash>afff4d2d8d6638904760b72dfd7025a031e7562d</infohash>
<guid>https://academictorrents.com/details/afff4d2d8d6638904760b72dfd7025a031e7562d</guid>
<link>https://academictorrents.com/details/afff4d2d8d6638904760b72dfd7025a031e7562d</link>
<description/>
<size>362151</size>
</item><item>
<title>An Introduction to Artificial Prediction Markets for Classification</title>
<category>Paper</category>
<infohash>d410f824cb7401a3b0f341279e0e4947cc5f6105</infohash>
<guid>https://academictorrents.com/details/d410f824cb7401a3b0f341279e0e4947cc5f6105</guid>
<link>https://academictorrents.com/details/d410f824cb7401a3b0f341279e0e4947cc5f6105</link>
<description/>
<size>382028</size>
</item><item>
<title>Learning Symbolic Representations of Hybrid Dynamical Systems</title>
<category>Paper</category>
<infohash>f41f4f536d2b60782911e1bfa257078d23465384</infohash>
<guid>https://academictorrents.com/details/f41f4f536d2b60782911e1bfa257078d23465384</guid>
<link>https://academictorrents.com/details/f41f4f536d2b60782911e1bfa257078d23465384</link>
<description/>
<size>1985417</size>
</item><item>
<title>Efficient Algorithms for Decision Tree Cross-validation</title>
<category>Paper</category>
<infohash>f6084af01c3b5f141357da7650a46652b7819e22</infohash>
<guid>https://academictorrents.com/details/f6084af01c3b5f141357da7650a46652b7819e22</guid>
<link>https://academictorrents.com/details/f6084af01c3b5f141357da7650a46652b7819e22</link>
<description/>
<size>1012368</size>
</item><item>
<title>JNCC2: The Java Implementation Of Naive Credal Classifier 2 (Machine Learning Open Source Software Paper)</title>
<category>Paper</category>
<infohash>abe316b76c07662377614dd1d3deeaf7bbd4abfb</infohash>
<guid>https://academictorrents.com/details/abe316b76c07662377614dd1d3deeaf7bbd4abfb</guid>
<link>https://academictorrents.com/details/abe316b76c07662377614dd1d3deeaf7bbd4abfb</link>
<description/>
<size>184931</size>
</item><item>
<title>On Boosting with Polynomially Bounded Distributions</title>
<category>Paper</category>
<infohash>329dd577181e340c38c987edcf47906ce5700777</infohash>
<guid>https://academictorrents.com/details/329dd577181e340c38c987edcf47906ce5700777</guid>
<link>https://academictorrents.com/details/329dd577181e340c38c987edcf47906ce5700777</link>
<description/>
<size>285321</size>
</item><item>
<title>Optimization Techniques for Semi-Supervised Support Vector Machines</title>
<category>Paper</category>
<infohash>239d78675e2c2f64f5d6544ff85b5d206f451a26</infohash>
<guid>https://academictorrents.com/details/239d78675e2c2f64f5d6544ff85b5d206f451a26</guid>
<link>https://academictorrents.com/details/239d78675e2c2f64f5d6544ff85b5d206f451a26</link>
<description/>
<size>48167</size>
</item><item>
<title>The Set Covering Machine</title>
<category>Paper</category>
<infohash>d7e14370bebe943dc6a4cef89a742de52a6a28c4</infohash>
<guid>https://academictorrents.com/details/d7e14370bebe943dc6a4cef89a742de52a6a28c4</guid>
<link>https://academictorrents.com/details/d7e14370bebe943dc6a4cef89a742de52a6a28c4</link>
<description/>
<size>599848</size>
</item><item>
<title>Stopping Criterion for Boosting-Based Data Reduction Techniques: from Binary to Multiclass Problem</title>
<category>Paper</category>
<infohash>0de748dc086791b5cde497b0dca4e896614a7d69</infohash>
<guid>https://academictorrents.com/details/0de748dc086791b5cde497b0dca4e896614a7d69</guid>
<link>https://academictorrents.com/details/0de748dc086791b5cde497b0dca4e896614a7d69</link>
<description/>
<size>274080</size>
</item><item>
<title>DEAP: Evolutionary Algorithms Made Easy</title>
<category>Paper</category>
<infohash>ac9a858e4781b9b8a0acb2d12a4ef291e6509fa9</infohash>
<guid>https://academictorrents.com/details/ac9a858e4781b9b8a0acb2d12a4ef291e6509fa9</guid>
<link>https://academictorrents.com/details/ac9a858e4781b9b8a0acb2d12a4ef291e6509fa9</link>
<description/>
<size>302542</size>
</item><item>
<title>Learning Probabilistic Models of Link Structure</title>
<category>Paper</category>
<infohash>05efe66eb343ac4873beadfff6fc89df0ab83eb4</infohash>
<guid>https://academictorrents.com/details/05efe66eb343ac4873beadfff6fc89df0ab83eb4</guid>
<link>https://academictorrents.com/details/05efe66eb343ac4873beadfff6fc89df0ab83eb4</link>
<description/>
<size>1989259</size>
</item><item>
<title>Rademacher and Gaussian Complexities: Risk Bounds and Structural Results</title>
<category>Paper</category>
<infohash>e2c999aee9996bdd2ec49ee83c38cf51680c4e45</infohash>
<guid>https://academictorrents.com/details/e2c999aee9996bdd2ec49ee83c38cf51680c4e45</guid>
<link>https://academictorrents.com/details/e2c999aee9996bdd2ec49ee83c38cf51680c4e45</link>
<description/>
<size>309448</size>
</item><item>
<title>Graphical Models for Structured Classification, with an Application to Interpreting Images of Protein Subcellular Location Patterns</title>
<category>Paper</category>
<infohash>a84843e9bb71109eff0386b39acb4f70a58b4d04</infohash>
<guid>https://academictorrents.com/details/a84843e9bb71109eff0386b39acb4f70a58b4d04</guid>
<link>https://academictorrents.com/details/a84843e9bb71109eff0386b39acb4f70a58b4d04</link>
<description/>
<size>162578</size>
</item><item>
<title>Stationary Features and Cat Detection</title>
<category>Paper</category>
<infohash>5b025381a593836a068f13e3b07f039dd4e5222f</infohash>
<guid>https://academictorrents.com/details/5b025381a593836a068f13e3b07f039dd4e5222f</guid>
<link>https://academictorrents.com/details/5b025381a593836a068f13e3b07f039dd4e5222f</link>
<description/>
<size>286029</size>
</item><item>
<title>ε-MDPs: Learning in Varying Environments</title>
<category>Paper</category>
<infohash>3634303099110d62a04931a61d765db4eafe994d</infohash>
<guid>https://academictorrents.com/details/3634303099110d62a04931a61d765db4eafe994d</guid>
<link>https://academictorrents.com/details/3634303099110d62a04931a61d765db4eafe994d</link>
<description/>
<size>292579</size>
</item><item>
<title>A Tutorial on Conformal Prediction</title>
<category>Paper</category>
<infohash>350b9cfb3613f55812577cf980465694de9cdc74</infohash>
<guid>https://academictorrents.com/details/350b9cfb3613f55812577cf980465694de9cdc74</guid>
<link>https://academictorrents.com/details/350b9cfb3613f55812577cf980465694de9cdc74</link>
<description/>
<size>75336</size>
</item><item>
<title>Structural Learning of Chain Graphs via Decomposition</title>
<category>Paper</category>
<infohash>677280641c8076dda3e04887360a272b2405ab0a</infohash>
<guid>https://academictorrents.com/details/677280641c8076dda3e04887360a272b2405ab0a</guid>
<link>https://academictorrents.com/details/677280641c8076dda3e04887360a272b2405ab0a</link>
<description/>
<size>809442</size>
</item><item>
<title>Online Submodular Minimization</title>
<category>Paper</category>
<infohash>5533d2cdec46d72ab67cc9126cf262b8ac6be883</infohash>
<guid>https://academictorrents.com/details/5533d2cdec46d72ab67cc9126cf262b8ac6be883</guid>
<link>https://academictorrents.com/details/5533d2cdec46d72ab67cc9126cf262b8ac6be883</link>
<description/>
<size>140144</size>
</item><item>
<title>Distributional Word Clusters vs. Words for Text Categorization (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>658627300dd23648f58177f238327eb0832871c1</infohash>
<guid>https://academictorrents.com/details/658627300dd23648f58177f238327eb0832871c1</guid>
<link>https://academictorrents.com/details/658627300dd23648f58177f238327eb0832871c1</link>
<description/>
<size>140095</size>
</item><item>
<title>Manifold Identification in Dual Averaging for Regularized Stochastic Online Learning</title>
<category>Paper</category>
<infohash>757245e31fd10a11c8794d3384dbffbdf97fd1e1</infohash>
<guid>https://academictorrents.com/details/757245e31fd10a11c8794d3384dbffbdf97fd1e1</guid>
<link>https://academictorrents.com/details/757245e31fd10a11c8794d3384dbffbdf97fd1e1</link>
<description/>
<size>2179310</size>
</item><item>
<title>Use of the Zero-Norm with Linear Models and Kernel Methods (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>8a36766d87a75f389fa1aa990fa71f0731363b9a</infohash>
<guid>https://academictorrents.com/details/8a36766d87a75f389fa1aa990fa71f0731363b9a</guid>
<link>https://academictorrents.com/details/8a36766d87a75f389fa1aa990fa71f0731363b9a</link>
<description/>
<size>230272</size>
</item><item>
<title>Learning to Construct Fast Signal Processing Implementations</title>
<category>Paper</category>
<infohash>f7ae3b62362210a54498015cbc5f94db1ea2ee4d</infohash>
<guid>https://academictorrents.com/details/f7ae3b62362210a54498015cbc5f94db1ea2ee4d</guid>
<link>https://academictorrents.com/details/f7ae3b62362210a54498015cbc5f94db1ea2ee4d</link>
<description/>
<size>283990</size>
</item><item>
<title>A Divisive Information-Theoretic Feature Clustering Algorithm for Text Classification (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>244c23e3a7062823ef49de70eab0eb51f2c3468c</infohash>
<guid>https://academictorrents.com/details/244c23e3a7062823ef49de70eab0eb51f2c3468c</guid>
<link>https://academictorrents.com/details/244c23e3a7062823ef49de70eab0eb51f2c3468c</link>
<description/>
<size>301961</size>
</item><item>
<title>Grafting: Fast, Incremental Feature Selection by Gradient Descent in Function Space (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>f31b0ef2048424d3b7b66538d6c4da6f7a3c5db5</infohash>
<guid>https://academictorrents.com/details/f31b0ef2048424d3b7b66538d6c4da6f7a3c5db5</guid>
<link>https://academictorrents.com/details/f31b0ef2048424d3b7b66538d6c4da6f7a3c5db5</link>
<description/>
<size>253433</size>
</item><item>
<title>Learning Precise Timing with LSTM Recurrent Networks</title>
<category>Paper</category>
<infohash>f018ed13be7e4ca2f00b740524c3b36d5196cf55</infohash>
<guid>https://academictorrents.com/details/f018ed13be7e4ca2f00b740524c3b36d5196cf55</guid>
<link>https://academictorrents.com/details/f018ed13be7e4ca2f00b740524c3b36d5196cf55</link>
<description/>
<size>341852</size>
</item><item>
<title>Magic Moments for Structured Output Prediction</title>
<category>Paper</category>
<infohash>1c4fbdb2caa3b21d17d20cb8c001e434ceaec178</infohash>
<guid>https://academictorrents.com/details/1c4fbdb2caa3b21d17d20cb8c001e434ceaec178</guid>
<link>https://academictorrents.com/details/1c4fbdb2caa3b21d17d20cb8c001e434ceaec178</link>
<description/>
<size>264130</size>
</item><item>
<title>Search for Additive Nonlinear Time Series Causal Models</title>
<category>Paper</category>
<infohash>bc6a95f3a1ec7d89e52d1e6a4ae661dbaa769985</infohash>
<guid>https://academictorrents.com/details/bc6a95f3a1ec7d89e52d1e6a4ae661dbaa769985</guid>
<link>https://academictorrents.com/details/bc6a95f3a1ec7d89e52d1e6a4ae661dbaa769985</link>
<description/>
<size>784415</size>
</item><item>
<title>Robust Submodular Observation Selection</title>
<category>Paper</category>
<infohash>eaf6521c44b5ca8db8d93c57b324ff4c35b34f32</infohash>
<guid>https://academictorrents.com/details/eaf6521c44b5ca8db8d93c57b324ff4c35b34f32</guid>
<link>https://academictorrents.com/details/eaf6521c44b5ca8db8d93c57b324ff4c35b34f32</link>
<description/>
<size>25343849</size>
</item><item>
<title>Causal Reasoning with Ancestral Graphs (Special Topic on Causality)</title>
<category>Paper</category>
<infohash>b19ab3f5b9d697fabd43259db5dc42948356adc3</infohash>
<guid>https://academictorrents.com/details/b19ab3f5b9d697fabd43259db5dc42948356adc3</guid>
<link>https://academictorrents.com/details/b19ab3f5b9d697fabd43259db5dc42948356adc3</link>
<description/>
<size>728929</size>
</item><item>
<title>Incremental Identification of Qualitative Models of Biological Systems using Inductive Logic Programming</title>
<category>Paper</category>
<infohash>28acd76f57bb2194087ef9a57b13d6a4adb2edb5</infohash>
<guid>https://academictorrents.com/details/28acd76f57bb2194087ef9a57b13d6a4adb2edb5</guid>
<link>https://academictorrents.com/details/28acd76f57bb2194087ef9a57b13d6a4adb2edb5</link>
<description/>
<size>275986</size>
</item><item>
<title>Data-dependent margin-based generalization bounds for classification</title>
<category>Paper</category>
<infohash>28a4a9d084522d1bf2ffeb3163e2e771a8f8f34c</infohash>
<guid>https://academictorrents.com/details/28a4a9d084522d1bf2ffeb3163e2e771a8f8f34c</guid>
<link>https://academictorrents.com/details/28a4a9d084522d1bf2ffeb3163e2e771a8f8f34c</link>
<description/>
<size>298604</size>
</item><item>
<title>Mixed Membership Stochastic Blockmodels</title>
<category>Paper</category>
<infohash>2d9fa4c14ce0ea510fcbb35cdf2c1c026596dcaf</infohash>
<guid>https://academictorrents.com/details/2d9fa4c14ce0ea510fcbb35cdf2c1c026596dcaf</guid>
<link>https://academictorrents.com/details/2d9fa4c14ce0ea510fcbb35cdf2c1c026596dcaf</link>
<description/>
<size>140425</size>
</item><item>
<title>Minimal Kernel Classifiers (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>6c07eeed3d15b409e1e61afb2f35998f787f678e</infohash>
<guid>https://academictorrents.com/details/6c07eeed3d15b409e1e61afb2f35998f787f678e</guid>
<link>https://academictorrents.com/details/6c07eeed3d15b409e1e61afb2f35998f787f678e</link>
<description/>
<size>408146</size>
</item><item>
<title>Aggregation of SVM Classifiers Using Sobolev Spaces</title>
<category>Paper</category>
<infohash>a717b062ca3e26c659a513eb1cf48a2d7e760697</infohash>
<guid>https://academictorrents.com/details/a717b062ca3e26c659a513eb1cf48a2d7e760697</guid>
<link>https://academictorrents.com/details/a717b062ca3e26c659a513eb1cf48a2d7e760697</link>
<description/>
<size>197605</size>
</item><item>
<title>Multi-Agent Reinforcement Learning in Common Interest and Fixed Sum Stochastic Games: An Experimental Study</title>
<category>Paper</category>
<infohash>905a1856ae59c83e46961c39774a98d5e93b6f5d</infohash>
<guid>https://academictorrents.com/details/905a1856ae59c83e46961c39774a98d5e93b6f5d</guid>
<link>https://academictorrents.com/details/905a1856ae59c83e46961c39774a98d5e93b6f5d</link>
<description/>
<size>31044</size>
</item><item>
<title>Multiple-Instance Learning of Real-Valued Data</title>
<category>Paper</category>
<infohash>936a92932c01c3f5e9994ae8bd2115f4ccb4adc9</infohash>
<guid>https://academictorrents.com/details/936a92932c01c3f5e9994ae8bd2115f4ccb4adc9</guid>
<link>https://academictorrents.com/details/936a92932c01c3f5e9994ae8bd2115f4ccb4adc9</link>
<description/>
<size>7100</size>
</item><item>
<title>Theoretical Advantages of Lenient Learners: An Evolutionary Game Theoretic Perspective</title>
<category>Paper</category>
<infohash>378f768db789e965e18e82e6537655a4bd37fe12</infohash>
<guid>https://academictorrents.com/details/378f768db789e965e18e82e6537655a4bd37fe12</guid>
<link>https://academictorrents.com/details/378f768db789e965e18e82e6537655a4bd37fe12</link>
<description/>
<size>293983</size>
</item><item>
<title>Closed Sets for Labeled Data</title>
<category>Paper</category>
<infohash>bc40153090478105e03845c9168eef0fe41de929</infohash>
<guid>https://academictorrents.com/details/bc40153090478105e03845c9168eef0fe41de929</guid>
<link>https://academictorrents.com/details/bc40153090478105e03845c9168eef0fe41de929</link>
<description/>
<size>546173</size>
</item><item>
<title>Value Function Approximation using Multiple Aggregation for Multiattribute Resource Management</title>
<category>Paper</category>
<infohash>5950e2f9531fcafd052a29e6253fced3d0e29fe6</infohash>
<guid>https://academictorrents.com/details/5950e2f9531fcafd052a29e6253fced3d0e29fe6</guid>
<link>https://academictorrents.com/details/5950e2f9531fcafd052a29e6253fced3d0e29fe6</link>
<description/>
<size>1338397</size>
</item><item>
<title>Online Learning of Complex Prediction Problems Using Simultaneous Projections</title>
<category>Paper</category>
<infohash>f784e9f849bc0f4142145161b23ec7cf181ef026</infohash>
<guid>https://academictorrents.com/details/f784e9f849bc0f4142145161b23ec7cf181ef026</guid>
<link>https://academictorrents.com/details/f784e9f849bc0f4142145161b23ec7cf181ef026</link>
<description/>
<size>490020</size>
</item><item>
<title>Support Vector Machinery for Infinite Ensemble Learning</title>
<category>Paper</category>
<infohash>13837aac7d63d8f20681f7e6d6af6d0495b06cc7</infohash>
<guid>https://academictorrents.com/details/13837aac7d63d8f20681f7e6d6af6d0495b06cc7</guid>
<link>https://academictorrents.com/details/13837aac7d63d8f20681f7e6d6af6d0495b06cc7</link>
<description/>
<size>153849</size>
</item><item>
<title>An Information Criterion for Variable Selection in Support Vector Machines (Special Topic on Model Selection)</title>
<category>Paper</category>
<infohash>ce3807fa7cfe63dc402cea83a8f5d900fd7cd458</infohash>
<guid>https://academictorrents.com/details/ce3807fa7cfe63dc402cea83a8f5d900fd7cd458</guid>
<link>https://academictorrents.com/details/ce3807fa7cfe63dc402cea83a8f5d900fd7cd458</link>
<description/>
<size>764429</size>
</item><item>
<title>Probabilistic Characterization of Random Decision Trees</title>
<category>Paper</category>
<infohash>cf742ce887877ed5fd5e5f52526bcbe30c222b00</infohash>
<guid>https://academictorrents.com/details/cf742ce887877ed5fd5e5f52526bcbe30c222b00</guid>
<link>https://academictorrents.com/details/cf742ce887877ed5fd5e5f52526bcbe30c222b00</link>
<description/>
<size>231522</size>
</item><item>
<title>Learning to Combine Motor Primitives Via Greedy Additive Regression</title>
<category>Paper</category>
<infohash>66a93322d0218625ab3e86c285af5e15bf051c09</infohash>
<guid>https://academictorrents.com/details/66a93322d0218625ab3e86c285af5e15bf051c09</guid>
<link>https://academictorrents.com/details/66a93322d0218625ab3e86c285af5e15bf051c09</link>
<description/>
<size>1883581</size>
</item><item>
<title>An Error Bound Based on a Worst Likely Assignment</title>
<category>Paper</category>
<infohash>b731c15f0c42e454ec96775d8f3a6ddcd6409ff9</infohash>
<guid>https://academictorrents.com/details/b731c15f0c42e454ec96775d8f3a6ddcd6409ff9</guid>
<link>https://academictorrents.com/details/b731c15f0c42e454ec96775d8f3a6ddcd6409ff9</link>
<description/>
<size>163133</size>
</item><item>
<title>Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data</title>
<category>Paper</category>
<infohash>76ea1046a70ebe6bfacd53fee64ed6e22835391d</infohash>
<guid>https://academictorrents.com/details/76ea1046a70ebe6bfacd53fee64ed6e22835391d</guid>
<link>https://academictorrents.com/details/76ea1046a70ebe6bfacd53fee64ed6e22835391d</link>
<description/>
<size>284292</size>
</item><item>
<title>Value Function Based Reinforcement Learning in Changing Markovian Environments</title>
<category>Paper</category>
<infohash>26d1ed2e2fae81bb304ee38baeb6ac7da155d7ed</infohash>
<guid>https://academictorrents.com/details/26d1ed2e2fae81bb304ee38baeb6ac7da155d7ed</guid>
<link>https://academictorrents.com/details/26d1ed2e2fae81bb304ee38baeb6ac7da155d7ed</link>
<description/>
<size>382697</size>
</item><item>
<title>Learning Balls of Strings from Edit Corrections</title>
<category>Paper</category>
<infohash>6e8a90310cafa2984f6d6754ad83a60d5b35e5f2</infohash>
<guid>https://academictorrents.com/details/6e8a90310cafa2984f6d6754ad83a60d5b35e5f2</guid>
<link>https://academictorrents.com/details/6e8a90310cafa2984f6d6754ad83a60d5b35e5f2</link>
<description/>
<size>648887</size>
</item><item>
<title>A Moment Bound for Multi-hinge Classifiers</title>
<category>Paper</category>
<infohash>ebe3027120419d2ae25dee559946f954b0271be0</infohash>
<guid>https://academictorrents.com/details/ebe3027120419d2ae25dee559946f954b0271be0</guid>
<link>https://academictorrents.com/details/ebe3027120419d2ae25dee559946f954b0271be0</link>
<description/>
<size>4119696</size>
</item><item>
<title>Finding Optimal Bayesian Network Given a Super-Structure</title>
<category>Paper</category>
<infohash>59975e2fe7358266a56dcf58b568c8792749dfaa</infohash>
<guid>https://academictorrents.com/details/59975e2fe7358266a56dcf58b568c8792749dfaa</guid>
<link>https://academictorrents.com/details/59975e2fe7358266a56dcf58b568c8792749dfaa</link>
<description/>
<size>356667</size>
</item><item>
<title>Bayesian Inference and Optimal Design for the Sparse Linear Model</title>
<category>Paper</category>
<infohash>cd9896d409b4ff16a1c2bcb1d3c51ec2147787ae</infohash>
<guid>https://academictorrents.com/details/cd9896d409b4ff16a1c2bcb1d3c51ec2147787ae</guid>
<link>https://academictorrents.com/details/cd9896d409b4ff16a1c2bcb1d3c51ec2147787ae</link>
<description/>
<size>448970</size>
</item><item>
<title>Randomized Online PCA Algorithms with Regret Bounds that are Logarithmic in the Dimension</title>
<category>Paper</category>
<infohash>1b397cb54c2bb0378b38b64935ea868a8a109183</infohash>
<guid>https://academictorrents.com/details/1b397cb54c2bb0378b38b64935ea868a8a109183</guid>
<link>https://academictorrents.com/details/1b397cb54c2bb0378b38b64935ea868a8a109183</link>
<description/>
<size>306689</size>
</item><item>
<title>Learning from Multiple Sources</title>
<category>Paper</category>
<infohash>a91d148d082332f0c0b2231c6a08cc7c2869bf69</infohash>
<guid>https://academictorrents.com/details/a91d148d082332f0c0b2231c6a08cc7c2869bf69</guid>
<link>https://academictorrents.com/details/a91d148d082332f0c0b2231c6a08cc7c2869bf69</link>
<description/>
<size>510004</size>
</item><item>
<title>SimpleMKL</title>
<category>Paper</category>
<infohash>9a7b033a7a12876bb4b7b2376d9b28e1518cbe99</infohash>
<guid>https://academictorrents.com/details/9a7b033a7a12876bb4b7b2376d9b28e1518cbe99</guid>
<link>https://academictorrents.com/details/9a7b033a7a12876bb4b7b2376d9b28e1518cbe99</link>
<description/>
<size>843949</size>
</item><item>
<title>Finite-Time Bounds for Fitted Value Iteration</title>
<category>Paper</category>
<infohash>5e3d93e553bb1471433e36e94ad686d07f2b3eca</infohash>
<guid>https://academictorrents.com/details/5e3d93e553bb1471433e36e94ad686d07f2b3eca</guid>
<link>https://academictorrents.com/details/5e3d93e553bb1471433e36e94ad686d07f2b3eca</link>
<description/>
<size>54732</size>
</item><item>
<title>On the Suitable Domain for SVM Training in Image Coding</title>
<category>Paper</category>
<infohash>81a47ae86153d940580a01017b040da445bc28da</infohash>
<guid>https://academictorrents.com/details/81a47ae86153d940580a01017b040da445bc28da</guid>
<link>https://academictorrents.com/details/81a47ae86153d940580a01017b040da445bc28da</link>
<description/>
<size>521356</size>
</item><item>
<title>On the Size and Recovery of Submatrices of Ones in a Random Binary Matrix</title>
<category>Paper</category>
<infohash>8395af658162b29ab6723ab13822a7ca17d6341b</infohash>
<guid>https://academictorrents.com/details/8395af658162b29ab6723ab13822a7ca17d6341b</guid>
<link>https://academictorrents.com/details/8395af658162b29ab6723ab13822a7ca17d6341b</link>
<description/>
<size>856271</size>
</item><item>
<title>Learning Control Knowledge for Forward Search Planning</title>
<category>Paper</category>
<infohash>ca5b3f4c71025c801cf3383bf4ade2e3a09f2f54</infohash>
<guid>https://academictorrents.com/details/ca5b3f4c71025c801cf3383bf4ade2e3a09f2f54</guid>
<link>https://academictorrents.com/details/ca5b3f4c71025c801cf3383bf4ade2e3a09f2f54</link>
<description/>
<size>203047</size>
</item><item>
<title>On Relevant Dimensions in Kernel Feature Spaces</title>
<category>Paper</category>
<infohash>774436c940f047c67a4c7266b56706f0a0725f76</infohash>
<guid>https://academictorrents.com/details/774436c940f047c67a4c7266b56706f0a0725f76</guid>
<link>https://academictorrents.com/details/774436c940f047c67a4c7266b56706f0a0725f76</link>
<description/>
<size>102561</size>
</item><item>
<title>Linear-Time Computation of Similarity Measures for Sequential Data</title>
<category>Paper</category>
<infohash>0b94baebf5c94c37b3aac5ae0b1ea64e45712537</infohash>
<guid>https://academictorrents.com/details/0b94baebf5c94c37b3aac5ae0b1ea64e45712537</guid>
<link>https://academictorrents.com/details/0b94baebf5c94c37b3aac5ae0b1ea64e45712537</link>
<description/>
<size>533677</size>
</item><item>
<title>Trust Region Newton Method for Logistic Regression</title>
<category>Paper</category>
<infohash>6c04924c144aa652a22729992fc90c87029fbd5e</infohash>
<guid>https://academictorrents.com/details/6c04924c144aa652a22729992fc90c87029fbd5e</guid>
<link>https://academictorrents.com/details/6c04924c144aa652a22729992fc90c87029fbd5e</link>
<description/>
<size>48849</size>
</item><item>
<title>Estimating the Confidence Interval for Prediction Errors of Support Vector Machine Classifiers</title>
<category>Paper</category>
<infohash>903a5ecb011e8d9af24933ebe07cbe50e1f07be2</infohash>
<guid>https://academictorrents.com/details/903a5ecb011e8d9af24933ebe07cbe50e1f07be2</guid>
<link>https://academictorrents.com/details/903a5ecb011e8d9af24933ebe07cbe50e1f07be2</link>
<description/>
<size>449201</size>
</item><item>
<title>Gradient Tree Boosting for Training Conditional Random Fields</title>
<category>Paper</category>
<infohash>7199cf9eaca3c451860febeafe48565ea1b35a54</infohash>
<guid>https://academictorrents.com/details/7199cf9eaca3c451860febeafe48565ea1b35a54</guid>
<link>https://academictorrents.com/details/7199cf9eaca3c451860febeafe48565ea1b35a54</link>
<description/>
<size>2504832</size>
</item><item>
<title>Exponentiated Gradient Algorithms for Conditional Random Fields and Max-Margin Markov Networks</title>
<category>Paper</category>
<infohash>07778ccabf041692ac10728350a99310ad4bcce0</infohash>
<guid>https://academictorrents.com/details/07778ccabf041692ac10728350a99310ad4bcce0</guid>
<link>https://academictorrents.com/details/07778ccabf041692ac10728350a99310ad4bcce0</link>
<description/>
<size>252227</size>
</item><item>
<title>Active Learning by Spherical Subdivision</title>
<category>Paper</category>
<infohash>9e38b71ba22a194b1ed0008950d2fef430f7b8e8</infohash>
<guid>https://academictorrents.com/details/9e38b71ba22a194b1ed0008950d2fef430f7b8e8</guid>
<link>https://academictorrents.com/details/9e38b71ba22a194b1ed0008950d2fef430f7b8e8</link>
<description/>
<size>560527</size>
</item><item>
<title>Nearly Uniform Validation Improves Compression-Based Error Bounds</title>
<category>Paper</category>
<infohash>6dee60ca650e69104ca00cddcfa37d1bf4f24662</infohash>
<guid>https://academictorrents.com/details/6dee60ca650e69104ca00cddcfa37d1bf4f24662</guid>
<link>https://academictorrents.com/details/6dee60ca650e69104ca00cddcfa37d1bf4f24662</link>
<description/>
<size>194191</size>
</item><item>
<title>Discriminative Learning of Max-Sum Classifiers</title>
<category>Paper</category>
<infohash>375896cb5a477b789da29b77101d39cbda5ac916</infohash>
<guid>https://academictorrents.com/details/375896cb5a477b789da29b77101d39cbda5ac916</guid>
<link>https://academictorrents.com/details/375896cb5a477b789da29b77101d39cbda5ac916</link>
<description/>
<size>1533844</size>
</item><item>
<title>Multi-class Discriminant Kernel Learning via Convex Programming (Special Topic on Model Selection)</title>
<category>Paper</category>
<infohash>2564fc98f00cee827e978026078e996d71540995</infohash>
<guid>https://academictorrents.com/details/2564fc98f00cee827e978026078e996d71540995</guid>
<link>https://academictorrents.com/details/2564fc98f00cee827e978026078e996d71540995</link>
<description/>
<size>269352</size>
</item><item>
<title>Automatic PCA Dimension Selection for High Dimensional Data and Small Sample Sizes</title>
<category>Paper</category>
<infohash>d5c9aa310bbaa11f7ce67c5293e760e558207019</infohash>
<guid>https://academictorrents.com/details/d5c9aa310bbaa11f7ce67c5293e760e558207019</guid>
<link>https://academictorrents.com/details/d5c9aa310bbaa11f7ce67c5293e760e558207019</link>
<description/>
<size>3716712</size>
</item><item>
<title>Dynamic Hierarchical Markov Random Fields for Integrated Web Data Extraction</title>
<category>Paper</category>
<infohash>64a22f796592c79060c833e8f4f43b57a58e22df</infohash>
<guid>https://academictorrents.com/details/64a22f796592c79060c833e8f4f43b57a58e22df</guid>
<link>https://academictorrents.com/details/64a22f796592c79060c833e8f4f43b57a58e22df</link>
<description/>
<size>441498</size>
</item><item>
<title>Ranking Categorical Features Using Generalization Properties</title>
<category>Paper</category>
<infohash>944422ba09b0a06697855b56bdb2cbb8d7d8c399</infohash>
<guid>https://academictorrents.com/details/944422ba09b0a06697855b56bdb2cbb8d7d8c399</guid>
<link>https://academictorrents.com/details/944422ba09b0a06697855b56bdb2cbb8d7d8c399</link>
<description/>
<size>449954</size>
</item><item>
<title>Visualizing Data using t-SNE</title>
<category>Paper</category>
<infohash>9637de2f50952d9d8f52ac301ef04adef7fc7e4e</infohash>
<guid>https://academictorrents.com/details/9637de2f50952d9d8f52ac301ef04adef7fc7e4e</guid>
<link>https://academictorrents.com/details/9637de2f50952d9d8f52ac301ef04adef7fc7e4e</link>
<description/>
<size>254269</size>
</item><item>
<title>Non-Parametric Modeling of Partially Ranked Data</title>
<category>Paper</category>
<infohash>530abae83beab7b0c28ada2b2e0c05c501368e39</infohash>
<guid>https://academictorrents.com/details/530abae83beab7b0c28ada2b2e0c05c501368e39</guid>
<link>https://academictorrents.com/details/530abae83beab7b0c28ada2b2e0c05c501368e39</link>
<description/>
<size>390105</size>
</item><item>
<title>Learning Similarity with Operator-valued Large-margin Classifiers</title>
<category>Paper</category>
<infohash>fe7472920bcae4ff936715f160f4d19a91500a45</infohash>
<guid>https://academictorrents.com/details/fe7472920bcae4ff936715f160f4d19a91500a45</guid>
<link>https://academictorrents.com/details/fe7472920bcae4ff936715f160f4d19a91500a45</link>
<description/>
<size>197461</size>
</item><item>
<title>Learning to Select Features using their Properties</title>
<category>Paper</category>
<infohash>0baade895c487724dc4e1bb3c00ac01436c9f767</infohash>
<guid>https://academictorrents.com/details/0baade895c487724dc4e1bb3c00ac01436c9f767</guid>
<link>https://academictorrents.com/details/0baade895c487724dc4e1bb3c00ac01436c9f767</link>
<description/>
<size>137755</size>
</item><item>
<title>A Multiple Instance Learning Strategy for Combating Good Word Attacks on Spam Filters</title>
<category>Paper</category>
<infohash>72be7eb406e778d33a811ee00b4c2f16bb298670</infohash>
<guid>https://academictorrents.com/details/72be7eb406e778d33a811ee00b4c2f16bb298670</guid>
<link>https://academictorrents.com/details/72be7eb406e778d33a811ee00b4c2f16bb298670</link>
<description/>
<size>281478</size>
</item><item>
<title>HPB: A Model for Handling BN Nodes with High Cardinality Parents</title>
<category>Paper</category>
<infohash>dd8607dea92a1f40c2fbf0ced40717a4101d15ef</infohash>
<guid>https://academictorrents.com/details/dd8607dea92a1f40c2fbf0ced40717a4101d15ef</guid>
<link>https://academictorrents.com/details/dd8607dea92a1f40c2fbf0ced40717a4101d15ef</link>
<description/>
<size>306205</size>
</item><item>
<title>Model Selection in Kernel Based Regression using the Influence Function (Special Topic on Model Selection)</title>
<category>Paper</category>
<infohash>0db504995fe64f0dca400a11f146ba007da5bcce</infohash>
<guid>https://academictorrents.com/details/0db504995fe64f0dca400a11f146ba007da5bcce</guid>
<link>https://academictorrents.com/details/0db504995fe64f0dca400a11f146ba007da5bcce</link>
<description/>
<size>264412</size>
</item><item>
<title>Minimal Nonlinear Distortion Principle for Nonlinear Independent Component Analysis</title>
<category>Paper</category>
<infohash>04b87a940d20054d7b8e8517b07594be39a22451</infohash>
<guid>https://academictorrents.com/details/04b87a940d20054d7b8e8517b07594be39a22451</guid>
<link>https://academictorrents.com/details/04b87a940d20054d7b8e8517b07594be39a22451</link>
<description/>
<size>2913342</size>
</item><item>
<title>Complete Identification Methods for the Causal Hierarchy (Special Topic on Causality)</title>
<category>Paper</category>
<infohash>494f67d57abb4a2b2a1569d0957c8465737e35a2</infohash>
<guid>https://academictorrents.com/details/494f67d57abb4a2b2a1569d0957c8465737e35a2</guid>
<link>https://academictorrents.com/details/494f67d57abb4a2b2a1569d0957c8465737e35a2</link>
<description/>
<size>495052</size>
</item><item>
<title>On the Consistency of Multiclass Classification Methods (Special Topic on the Conference on Learning Theory 2005)</title>
<category>Paper</category>
<infohash>7858fdf307d9fe94aeaaeaeadfc554988b80a3ce</infohash>
<guid>https://academictorrents.com/details/7858fdf307d9fe94aeaaeaeadfc554988b80a3ce</guid>
<link>https://academictorrents.com/details/7858fdf307d9fe94aeaaeaeadfc554988b80a3ce</link>
<description/>
<size>174763</size>
</item><item>
<title>A Stochastic Algorithm for Feature Selection in Pattern Recognition</title>
<category>Paper</category>
<infohash>cdb0f212f1ab2bfc77c8280174c2a32bfc640cf0</infohash>
<guid>https://academictorrents.com/details/cdb0f212f1ab2bfc77c8280174c2a32bfc640cf0</guid>
<link>https://academictorrents.com/details/cdb0f212f1ab2bfc77c8280174c2a32bfc640cf0</link>
<description/>
<size>438674</size>
</item><item>
<title>Markov Properties for Linear Causal Models with Correlated Errors (Special Topic on Causality)</title>
<category>Paper</category>
<infohash>91c569762ad4c2cee8abb9afab37b8e5cff93f6a</infohash>
<guid>https://academictorrents.com/details/91c569762ad4c2cee8abb9afab37b8e5cff93f6a</guid>
<link>https://academictorrents.com/details/91c569762ad4c2cee8abb9afab37b8e5cff93f6a</link>
<description/>
<size>231172</size>
</item><item>
<title>Maximum Entropy Discrimination Markov Networks</title>
<category>Paper</category>
<infohash>a8bee85a0b8dcd390e3623c7db0a27f43222659b</infohash>
<guid>https://academictorrents.com/details/a8bee85a0b8dcd390e3623c7db0a27f43222659b</guid>
<link>https://academictorrents.com/details/a8bee85a0b8dcd390e3623c7db0a27f43222659b</link>
<description/>
<size>593202</size>
</item><item>
<title>On Efficient Large Margin Semisupervised Learning: Method and Theory</title>
<category>Paper</category>
<infohash>7d806d1b2f4361971692c59b3862d2bad741ce78</infohash>
<guid>https://academictorrents.com/details/7d806d1b2f4361971692c59b3862d2bad741ce78</guid>
<link>https://academictorrents.com/details/7d806d1b2f4361971692c59b3862d2bad741ce78</link>
<description/>
<size>630897</size>
</item><item>
<title>Concentration Bounds for Unigram Language Models</title>
<category>Paper</category>
<infohash>c13eb0babbc742a76ffbddc13ff61cba5f32e8e5</infohash>
<guid>https://academictorrents.com/details/c13eb0babbc742a76ffbddc13ff61cba5f32e8e5</guid>
<link>https://academictorrents.com/details/c13eb0babbc742a76ffbddc13ff61cba5f32e8e5</link>
<description/>
<size>219415</size>
</item><item>
<title>Exact Simplification of Support Vector Solutions (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>b0441320a6eeb7f16b4dbad0a762d48bbcbdb5cd</infohash>
<guid>https://academictorrents.com/details/b0441320a6eeb7f16b4dbad0a762d48bbcbdb5cd</guid>
<link>https://academictorrents.com/details/b0441320a6eeb7f16b4dbad0a762d48bbcbdb5cd</link>
<description/>
<size>332061</size>
</item><item>
<title>A Near-Optimal Algorithm for Differentially-Private Principal Components</title>
<category>Paper</category>
<infohash>968f1af6a402ac653871675029d2bf9444d5d3aa</infohash>
<guid>https://academictorrents.com/details/968f1af6a402ac653871675029d2bf9444d5d3aa</guid>
<link>https://academictorrents.com/details/968f1af6a402ac653871675029d2bf9444d5d3aa</link>
<description/>
<size>424726</size>
</item><item>
<title>Regression on Fixed-Rank Positive Semidefinite Matrices: A Riemannian Approach</title>
<category>Paper</category>
<infohash>881e10e2a8109bc3c4c343e199fafdce1ba47e72</infohash>
<guid>https://academictorrents.com/details/881e10e2a8109bc3c4c343e199fafdce1ba47e72</guid>
<link>https://academictorrents.com/details/881e10e2a8109bc3c4c343e199fafdce1ba47e72</link>
<description/>
<size>340672</size>
</item><item>
<title>Statistical Consistency of Kernel Canonical Correlation Analysis</title>
<category>Paper</category>
<infohash>addad9344cac172f38057eac4c68c2dc19ae1813</infohash>
<guid>https://academictorrents.com/details/addad9344cac172f38057eac4c68c2dc19ae1813</guid>
<link>https://academictorrents.com/details/addad9344cac172f38057eac4c68c2dc19ae1813</link>
<description/>
<size>367796</size>
</item><item>
<title>Kernel Analysis of Deep Networks</title>
<category>Paper</category>
<infohash>5975549bc2e7cc159af7906844199fc8ac59c425</infohash>
<guid>https://academictorrents.com/details/5975549bc2e7cc159af7906844199fc8ac59c425</guid>
<link>https://academictorrents.com/details/5975549bc2e7cc159af7906844199fc8ac59c425</link>
<description/>
<size>762619</size>
</item><item>
<title>Stochastic Complexities of Gaussian Mixtures in Variational Bayesian Approximation</title>
<category>Paper</category>
<infohash>987a4a59d5e3361d945c18d53ccc12beabb05598</infohash>
<guid>https://academictorrents.com/details/987a4a59d5e3361d945c18d53ccc12beabb05598</guid>
<link>https://academictorrents.com/details/987a4a59d5e3361d945c18d53ccc12beabb05598</link>
<description/>
<size>140340</size>
</item><item>
<title>Entropy Inference and the James-Stein Estimator, with Application to Nonlinear Gene Association Networks</title>
<category>Paper</category>
<infohash>f829b5170f398d4967cb73c21017e35d3f4ca234</infohash>
<guid>https://academictorrents.com/details/f829b5170f398d4967cb73c21017e35d3f4ca234</guid>
<link>https://academictorrents.com/details/f829b5170f398d4967cb73c21017e35d3f4ca234</link>
<description/>
<size>147291</size>
</item><item>
<title>Superior Guarantees for Sequential Prediction and Lossless Compression via Alphabet Decomposition</title>
<category>Paper</category>
<infohash>42c544de8a7f3e8a6819431fec63283f1add5fe0</infohash>
<guid>https://academictorrents.com/details/42c544de8a7f3e8a6819431fec63283f1add5fe0</guid>
<link>https://academictorrents.com/details/42c544de8a7f3e8a6819431fec63283f1add5fe0</link>
<description/>
<size>271670</size>
</item><item>
<title>Efficient Margin Maximizing with Boosting</title>
<category>Paper</category>
<infohash>0a1853a62449b6b58acdcf59914841735ffbb8e6</infohash>
<guid>https://academictorrents.com/details/0a1853a62449b6b58acdcf59914841735ffbb8e6</guid>
<link>https://academictorrents.com/details/0a1853a62449b6b58acdcf59914841735ffbb8e6</link>
<description/>
<size>605342</size>
</item><item>
<title>Efficient and Effective Visual Codebook Generation Using Additive Kernels</title>
<category>Paper</category>
<infohash>7b4b3bbdc638189b68335bfbea6a25ba5fddd0bc</infohash>
<guid>https://academictorrents.com/details/7b4b3bbdc638189b68335bfbea6a25ba5fddd0bc</guid>
<link>https://academictorrents.com/details/7b4b3bbdc638189b68335bfbea6a25ba5fddd0bc</link>
<description/>
<size>165795</size>
</item><item>
<title>One-Class Novelty Detection for Seizure Analysis from Intracranial EEG</title>
<category>Paper</category>
<infohash>82f2ad2030b66cdb205fa6726dfcb13413963ae0</infohash>
<guid>https://academictorrents.com/details/82f2ad2030b66cdb205fa6726dfcb13413963ae0</guid>
<link>https://academictorrents.com/details/82f2ad2030b66cdb205fa6726dfcb13413963ae0</link>
<description/>
<size>276869</size>
</item><item>
<title>One-Class SVMs for Document Classification (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>8bb752460e3ddecdf5c6e97aef7c4f6300429f35</infohash>
<guid>https://academictorrents.com/details/8bb752460e3ddecdf5c6e97aef7c4f6300429f35</guid>
<link>https://academictorrents.com/details/8bb752460e3ddecdf5c6e97aef7c4f6300429f35</link>
<description/>
<size>326603</size>
</item><item>
<title>Recommender Systems Using Linear Classifiers</title>
<category>Paper</category>
<infohash>540a23376d40d553340e7b44a2ce99efac5215d7</infohash>
<guid>https://academictorrents.com/details/540a23376d40d553340e7b44a2ce99efac5215d7</guid>
<link>https://academictorrents.com/details/540a23376d40d553340e7b44a2ce99efac5215d7</link>
<description/>
<size>248116</size>
</item><item>
<title>On Equivalence Relationships Between Classification and Ranking Algorithms</title>
<category>Paper</category>
<infohash>67298f2c214fabd83eab7dd209d6c010e91e040a</infohash>
<guid>https://academictorrents.com/details/67298f2c214fabd83eab7dd209d6c010e91e040a</guid>
<link>https://academictorrents.com/details/67298f2c214fabd83eab7dd209d6c010e91e040a</link>
<description/>
<size>376989</size>
</item><item>
<title>Characterizing the Function Space for Bayesian Kernel Models</title>
<category>Paper</category>
<infohash>1ea04a9b83c5a3e90ca3b6ec6db8fd330d6547c6</infohash>
<guid>https://academictorrents.com/details/1ea04a9b83c5a3e90ca3b6ec6db8fd330d6547c6</guid>
<link>https://academictorrents.com/details/1ea04a9b83c5a3e90ca3b6ec6db8fd330d6547c6</link>
<description/>
<size>228847</size>
</item><item>
<title>The Locally Weighted Bag of Words Framework for Document Representation</title>
<category>Paper</category>
<infohash>b0f8ae49890d4cf83f1f0d2deb274725dba71e32</infohash>
<guid>https://academictorrents.com/details/b0f8ae49890d4cf83f1f0d2deb274725dba71e32</guid>
<link>https://academictorrents.com/details/b0f8ae49890d4cf83f1f0d2deb274725dba71e32</link>
<description/>
<size>1737461</size>
</item><item>
<title>The Stationary Subspace Analysis Toolbox</title>
<category>Paper</category>
<infohash>f3210a6851c50793e0b96e20867b5c717cf6d489</infohash>
<guid>https://academictorrents.com/details/f3210a6851c50793e0b96e20867b5c717cf6d489</guid>
<link>https://academictorrents.com/details/f3210a6851c50793e0b96e20867b5c717cf6d489</link>
<description/>
<size>70419</size>
</item><item>
<title>Unlabeled Compression Schemes for Maximum Classes</title>
<category>Paper</category>
<infohash>f939f605369d0dc11ec8ccaf6f472911be5c56d3</infohash>
<guid>https://academictorrents.com/details/f939f605369d0dc11ec8ccaf6f472911be5c56d3</guid>
<link>https://academictorrents.com/details/f939f605369d0dc11ec8ccaf6f472911be5c56d3</link>
<description/>
<size>457234</size>
</item><item>
<title>Two Distributed-State Models For Generating High-Dimensional Time Series</title>
<category>Paper</category>
<infohash>d044760b9925a1eb2861475316f5938a6141a2ef</infohash>
<guid>https://academictorrents.com/details/d044760b9925a1eb2861475316f5938a6141a2ef</guid>
<link>https://academictorrents.com/details/d044760b9925a1eb2861475316f5938a6141a2ef</link>
<description/>
<size>866010</size>
</item><item>
<title>Fast Iterative Kernel Principal Component Analysis</title>
<category>Paper</category>
<infohash>a63f5649a1288237c6f5d3904c1999a8c889ab90</infohash>
<guid>https://academictorrents.com/details/a63f5649a1288237c6f5d3904c1999a8c889ab90</guid>
<link>https://academictorrents.com/details/a63f5649a1288237c6f5d3904c1999a8c889ab90</link>
<description/>
<size>3186111</size>
</item><item>
<title>Nonparametric Quantile Estimation</title>
<category>Paper</category>
<infohash>8cbd0b0a61916754b4346403670e203f8712437b</infohash>
<guid>https://academictorrents.com/details/8cbd0b0a61916754b4346403670e203f8712437b</guid>
<link>https://academictorrents.com/details/8cbd0b0a61916754b4346403670e203f8712437b</link>
<description/>
<size>904720</size>
</item><item>
<title>Provably Efficient Learning with Typed Parametric Models</title>
<category>Paper</category>
<infohash>aaa2d79273c5aa9e6e5a3d88d5b23f31b1b42884</infohash>
<guid>https://academictorrents.com/details/aaa2d79273c5aa9e6e5a3d88d5b23f31b1b42884</guid>
<link>https://academictorrents.com/details/aaa2d79273c5aa9e6e5a3d88d5b23f31b1b42884</link>
<description/>
<size>2486378</size>
</item><item>
<title>Machine Learning for Computer Security (Special Topic on Machine Learning for Computer Security)</title>
<category>Paper</category>
<infohash>3eced34cd948e7ea92f31ded3e0fd734274fee4a</infohash>
<guid>https://academictorrents.com/details/3eced34cd948e7ea92f31ded3e0fd734274fee4a</guid>
<link>https://academictorrents.com/details/3eced34cd948e7ea92f31ded3e0fd734274fee4a</link>
<description/>
<size>59935</size>
</item><item>
<title>Weisfeiler-Lehman Graph Kernels</title>
<category>Paper</category>
<infohash>cd8ab06b42bef8ed5f3f2fdff35e91caad1ee17f</infohash>
<guid>https://academictorrents.com/details/cd8ab06b42bef8ed5f3f2fdff35e91caad1ee17f</guid>
<link>https://academictorrents.com/details/cd8ab06b42bef8ed5f3f2fdff35e91caad1ee17f</link>
<description/>
<size>299278</size>
</item><item>
<title>Incremental Algorithms for Hierarchical Classification</title>
<category>Paper</category>
<infohash>8e9fbb3ea861d404bb3ad130f2aa9c970ecc4778</infohash>
<guid>https://academictorrents.com/details/8e9fbb3ea861d404bb3ad130f2aa9c970ecc4778</guid>
<link>https://academictorrents.com/details/8e9fbb3ea861d404bb3ad130f2aa9c970ecc4778</link>
<description/>
<size>183489</size>
</item><item>
<title>Machine Learning with Data Dependent Hypothesis Classes</title>
<category>Paper</category>
<infohash>6ad9ad36edba1929e716dd78101cd00e3fbf6376</infohash>
<guid>https://academictorrents.com/details/6ad9ad36edba1929e716dd78101cd00e3fbf6376</guid>
<link>https://academictorrents.com/details/6ad9ad36edba1929e716dd78101cd00e3fbf6376</link>
<description/>
<size>145270</size>
</item><item>
<title>Learning Spectral Clustering, With Application To Speech Separation</title>
<category>Paper</category>
<infohash>ceb026d186ec8a3a36151226d8bf3b939d313018</infohash>
<guid>https://academictorrents.com/details/ceb026d186ec8a3a36151226d8bf3b939d313018</guid>
<link>https://academictorrents.com/details/ceb026d186ec8a3a36151226d8bf3b939d313018</link>
<description/>
<size>660254</size>
</item><item>
<title>Better Algorithms for Benign Bandits</title>
<category>Paper</category>
<infohash>aafc09fd9ad321c97aa4a22a5fe9430375c7323e</infohash>
<guid>https://academictorrents.com/details/aafc09fd9ad321c97aa4a22a5fe9430375c7323e</guid>
<link>https://academictorrents.com/details/aafc09fd9ad321c97aa4a22a5fe9430375c7323e</link>
<description/>
<size>186296</size>
</item><item>
<title>Adaptive Exact Inference in Graphical Models</title>
<category>Paper</category>
<infohash>ad910eb28d6a12ce53198743901eb3aac2d6f3d2</infohash>
<guid>https://academictorrents.com/details/ad910eb28d6a12ce53198743901eb3aac2d6f3d2</guid>
<link>https://academictorrents.com/details/ad910eb28d6a12ce53198743901eb3aac2d6f3d2</link>
<description/>
<size>644271</size>
</item><item>
<title>AdaBoost is Consistent</title>
<category>Paper</category>
<infohash>0eb2fb76eb57bfb05278dcecc6a6b2a297d65dfd</infohash>
<guid>https://academictorrents.com/details/0eb2fb76eb57bfb05278dcecc6a6b2a297d65dfd</guid>
<link>https://academictorrents.com/details/0eb2fb76eb57bfb05278dcecc6a6b2a297d65dfd</link>
<description/>
<size>184121</size>
</item><item>
<title>Clustering with Bregman Divergences</title>
<category>Paper</category>
<infohash>db239d92ba00c26cab551146d29f9ce404de4206</infohash>
<guid>https://academictorrents.com/details/db239d92ba00c26cab551146d29f9ce404de4206</guid>
<link>https://academictorrents.com/details/db239d92ba00c26cab551146d29f9ce404de4206</link>
<description/>
<size>342744</size>
</item><item>
<title>Strong Limit Theorems for the Bayesian Scoring Criterion in Bayesian Networks</title>
<category>Paper</category>
<infohash>86baaa1e04744d1efdd63b5627f0cde988b82aae</infohash>
<guid>https://academictorrents.com/details/86baaa1e04744d1efdd63b5627f0cde988b82aae</guid>
<link>https://academictorrents.com/details/86baaa1e04744d1efdd63b5627f0cde988b82aae</link>
<description/>
<size>120693</size>
</item><item>
<title>Support Vector Clustering (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>676311e19d559eff4c0d65ed23abb7fd143a03d3</infohash>
<guid>https://academictorrents.com/details/676311e19d559eff4c0d65ed23abb7fd143a03d3</guid>
<link>https://academictorrents.com/details/676311e19d559eff4c0d65ed23abb7fd143a03d3</link>
<description/>
<size>326517</size>
</item><item>
<title>Considering Cost Asymmetry in Learning Classifiers</title>
<category>Paper</category>
<infohash>a9bd8db4e6ddb322259cacaa0c0668f7b6fc1290</infohash>
<guid>https://academictorrents.com/details/a9bd8db4e6ddb322259cacaa0c0668f7b6fc1290</guid>
<link>https://academictorrents.com/details/a9bd8db4e6ddb322259cacaa0c0668f7b6fc1290</link>
<description/>
<size>355115</size>
</item><item>
<title>Semigroup Kernels on Measures</title>
<category>Paper</category>
<infohash>50c2c345aacf27cf1065fc09716eb8259b3dcc51</infohash>
<guid>https://academictorrents.com/details/50c2c345aacf27cf1065fc09716eb8259b3dcc51</guid>
<link>https://academictorrents.com/details/50c2c345aacf27cf1065fc09716eb8259b3dcc51</link>
<description/>
<size>379052</size>
</item><item>
<title>Universal Kernels</title>
<category>Paper</category>
<infohash>966332ffa949ca565e874c3b0c4db39b3bddf2b4</infohash>
<guid>https://academictorrents.com/details/966332ffa949ca565e874c3b0c4db39b3bddf2b4</guid>
<link>https://academictorrents.com/details/966332ffa949ca565e874c3b0c4db39b3bddf2b4</link>
<description/>
<size>140166</size>
</item><item>
<title>Learning Sparse Representations by Non-Negative Matrix Factorization and Sequential Cone Programming (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>b430633dd781b7b7357597c66666f9ea73b7be0a</infohash>
<guid>https://academictorrents.com/details/b430633dd781b7b7357597c66666f9ea73b7be0a</guid>
<link>https://academictorrents.com/details/b430633dd781b7b7357597c66666f9ea73b7be0a</link>
<description/>
<size>254488</size>
</item><item>
<title>Model Monitor (M2): Evaluating, Comparing, and Monitoring Models (Machine Learning Open Source Software Paper)</title>
<category>Paper</category>
<infohash>537f6b864879a1141c725ccdb56b78efe10bb5d7</infohash>
<guid>https://academictorrents.com/details/537f6b864879a1141c725ccdb56b78efe10bb5d7</guid>
<link>https://academictorrents.com/details/537f6b864879a1141c725ccdb56b78efe10bb5d7</link>
<description/>
<size>169710</size>
</item><item>
<title>Structure Spaces</title>
<category>Paper</category>
<infohash>61cf92536eea9604d8755af5ee0e712e718f0692</infohash>
<guid>https://academictorrents.com/details/61cf92536eea9604d8755af5ee0e712e718f0692</guid>
<link>https://academictorrents.com/details/61cf92536eea9604d8755af5ee0e712e718f0692</link>
<description/>
<size>669799</size>
</item><item>
<title>Bilinear Discriminant Component Analysis</title>
<category>Paper</category>
<infohash>4a632373b4f3d101d2991d50a06c30b28f58c9b9</infohash>
<guid>https://academictorrents.com/details/4a632373b4f3d101d2991d50a06c30b28f58c9b9</guid>
<link>https://academictorrents.com/details/4a632373b4f3d101d2991d50a06c30b28f58c9b9</link>
<description/>
<size>669205</size>
</item><item>
<title>Linear State-Space Models for Blind Source Separation</title>
<category>Paper</category>
<infohash>1d93e447c4599e8d41257cb05574b9fc81b2d1ce</infohash>
<guid>https://academictorrents.com/details/1d93e447c4599e8d41257cb05574b9fc81b2d1ce</guid>
<link>https://academictorrents.com/details/1d93e447c4599e8d41257cb05574b9fc81b2d1ce</link>
<description/>
<size>325019</size>
</item><item>
<title>An Anticorrelation Kernel for Subsystem Training in Multiple Classifier Systems</title>
<category>Paper</category>
<infohash>c684ecfe5efb99fd7b4dfeba356e761dbe7d9821</infohash>
<guid>https://academictorrents.com/details/c684ecfe5efb99fd7b4dfeba356e761dbe7d9821</guid>
<link>https://academictorrents.com/details/c684ecfe5efb99fd7b4dfeba356e761dbe7d9821</link>
<description/>
<size>560990</size>
</item><item>
<title>Dynamic Weighted Majority: An Ensemble Method for Drifting Concepts</title>
<category>Paper</category>
<infohash>682e6eabe24d0007d9b35df1490e66f211379437</infohash>
<guid>https://academictorrents.com/details/682e6eabe24d0007d9b35df1490e66f211379437</guid>
<link>https://academictorrents.com/details/682e6eabe24d0007d9b35df1490e66f211379437</link>
<description/>
<size>490168</size>
</item><item>
<title>Maximum Margin Algorithms with Boolean Kernels</title>
<category>Paper</category>
<infohash>7dc07f4ef25da9cd9da379f80e6f88873b9fc6ba</infohash>
<guid>https://academictorrents.com/details/7dc07f4ef25da9cd9da379f80e6f88873b9fc6ba</guid>
<link>https://academictorrents.com/details/7dc07f4ef25da9cd9da379f80e6f88873b9fc6ba</link>
<description/>
<size>192065</size>
</item><item>
<title>Consistency of Multiclass Empirical Risk Minimization Methods Based on Convex Loss</title>
<category>Paper</category>
<infohash>d69e949c56375d58ede4a94869d91f971e22c59e</infohash>
<guid>https://academictorrents.com/details/d69e949c56375d58ede4a94869d91f971e22c59e</guid>
<link>https://academictorrents.com/details/d69e949c56375d58ede4a94869d91f971e22c59e</link>
<description/>
<size>122780</size>
</item><item>
<title>The Sample Complexity of Dictionary Learning</title>
<category>Paper</category>
<infohash>05ff4e0edd719b5f4665f8f2b4bda672882e6496</infohash>
<guid>https://academictorrents.com/details/05ff4e0edd719b5f4665f8f2b4bda672882e6496</guid>
<link>https://academictorrents.com/details/05ff4e0edd719b5f4665f8f2b4bda672882e6496</link>
<description/>
<size>185590</size>
</item><item>
<title>Models of Cooperative Teaching and Learning</title>
<category>Paper</category>
<infohash>8c3255d6bad6b7202880db1517458f28afe55254</infohash>
<guid>https://academictorrents.com/details/8c3255d6bad6b7202880db1517458f28afe55254</guid>
<link>https://academictorrents.com/details/8c3255d6bad6b7202880db1517458f28afe55254</link>
<description/>
<size>278164</size>
</item><item>
<title>High-dimensional Covariance Estimation Based On Gaussian Graphical Models</title>
<category>Paper</category>
<infohash>b3cd62da6ee16228b9c1065111e49e299d3ab1ae</infohash>
<guid>https://academictorrents.com/details/b3cd62da6ee16228b9c1065111e49e299d3ab1ae</guid>
<link>https://academictorrents.com/details/b3cd62da6ee16228b9c1065111e49e299d3ab1ae</link>
<description/>
<size>735000</size>
</item><item>
<title>Margin-based Ranking and an Equivalence between AdaBoost and RankBoost</title>
<category>Paper</category>
<infohash>6cd0a8f32126a69e15a2cd64214f78ba1d794552</infohash>
<guid>https://academictorrents.com/details/6cd0a8f32126a69e15a2cd64214f78ba1d794552</guid>
<link>https://academictorrents.com/details/6cd0a8f32126a69e15a2cd64214f78ba1d794552</link>
<description/>
<size>420285</size>
</item><item>
<title>Variable Sparsity Kernel Learning</title>
<category>Paper</category>
<infohash>270fcf02619a09a2adc72c279551898d2e7ae330</infohash>
<guid>https://academictorrents.com/details/270fcf02619a09a2adc72c279551898d2e7ae330</guid>
<link>https://academictorrents.com/details/270fcf02619a09a2adc72c279551898d2e7ae330</link>
<description/>
<size>236150</size>
</item><item>
<title>The Need for Open Source Software in Machine Learning</title>
<category>Paper</category>
<infohash>c859121d01e0ae57fd36488c9a5c529b29815685</infohash>
<guid>https://academictorrents.com/details/c859121d01e0ae57fd36488c9a5c529b29815685</guid>
<link>https://academictorrents.com/details/c859121d01e0ae57fd36488c9a5c529b29815685</link>
<description/>
<size>1278865</size>
</item><item>
<title>Faster Algorithms for Max-Product Message-Passing</title>
<category>Paper</category>
<infohash>21b53d95668aa4a95bdde21e4cde68ff9b6d6803</infohash>
<guid>https://academictorrents.com/details/21b53d95668aa4a95bdde21e4cde68ff9b6d6803</guid>
<link>https://academictorrents.com/details/21b53d95668aa4a95bdde21e4cde68ff9b6d6803</link>
<description/>
<size>1119635</size>
</item><item>
<title>Kernel Partial Least Squares Regression in Reproducing Kernel Hilbert Space (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>36e0378030715a494a927af440defed23b2880bf</infohash>
<guid>https://academictorrents.com/details/36e0378030715a494a927af440defed23b2880bf</guid>
<link>https://academictorrents.com/details/36e0378030715a494a927af440defed23b2880bf</link>
<description/>
<size>235157</size>
</item><item>
<title>Optimized Cutting Plane Algorithm for Large-Scale Risk Minimization</title>
<category>Paper</category>
<infohash>e0851617cd76e1cfa4de188b713af44ac40816fc</infohash>
<guid>https://academictorrents.com/details/e0851617cd76e1cfa4de188b713af44ac40816fc</guid>
<link>https://academictorrents.com/details/e0851617cd76e1cfa4de188b713af44ac40816fc</link>
<description/>
<size>1616302</size>
</item><item>
<title>Adaptive Subgradient Methods for Online Learning and Stochastic Optimization</title>
<category>Paper</category>
<infohash>d385e01673b699db102a3a362ebb4fba46ee3660</infohash>
<guid>https://academictorrents.com/details/d385e01673b699db102a3a362ebb4fba46ee3660</guid>
<link>https://academictorrents.com/details/d385e01673b699db102a3a362ebb4fba46ee3660</link>
<description/>
<size>307882</size>
</item><item>
<title>A Direct Method for Building Sparse Kernel Learning Algorithms</title>
<category>Paper</category>
<infohash>2e96deb9f1a0b27ff0c3a2fd6d89b83fa47672aa</infohash>
<guid>https://academictorrents.com/details/2e96deb9f1a0b27ff0c3a2fd6d89b83fa47672aa</guid>
<link>https://academictorrents.com/details/2e96deb9f1a0b27ff0c3a2fd6d89b83fa47672aa</link>
<description/>
<size>282870</size>
</item><item>
<title>Learning the Kernel Function via Regularization</title>
<category>Paper</category>
<infohash>6bf074135eaaea73a3e5d6e548b0b31333ab906e</infohash>
<guid>https://academictorrents.com/details/6bf074135eaaea73a3e5d6e548b0b31333ab906e</guid>
<link>https://academictorrents.com/details/6bf074135eaaea73a3e5d6e548b0b31333ab906e</link>
<description/>
<size>221869</size>
</item><item>
<title>Diffusion Kernels on Statistical Manifolds</title>
<category>Paper</category>
<infohash>4209420e28a999fa898e2305185feb3728d272e9</infohash>
<guid>https://academictorrents.com/details/4209420e28a999fa898e2305185feb3728d272e9</guid>
<link>https://academictorrents.com/details/4209420e28a999fa898e2305185feb3728d272e9</link>
<description/>
<size>1337167</size>
</item><item>
<title>A New Probabilistic Approach in Rank Regression with Optimal Bayesian Partitioning (Special Topic on Model Selection)</title>
<category>Paper</category>
<infohash>c7d9b7be3f7aaa9167cae52a23e800c0e0ef5cea</infohash>
<guid>https://academictorrents.com/details/c7d9b7be3f7aaa9167cae52a23e800c0e0ef5cea</guid>
<link>https://academictorrents.com/details/c7d9b7be3f7aaa9167cae52a23e800c0e0ef5cea</link>
<description/>
<size>1130657</size>
</item><item>
<title>Spam Filtering Based On The Analysis Of Text Information Embedded Into Images (Special Topic on Machine Learning for Computer Security)</title>
<category>Paper</category>
<infohash>ad9c48d261d03045f96d3202cf4858fc57b20699</infohash>
<guid>https://academictorrents.com/details/ad9c48d261d03045f96d3202cf4858fc57b20699</guid>
<link>https://academictorrents.com/details/ad9c48d261d03045f96d3202cf4858fc57b20699</link>
<description/>
<size>482175</size>
</item><item>
<title>Asymptotic Model Selection for Naive Bayesian Networks</title>
<category>Paper</category>
<infohash>6690e0941328f9ea8724754e91f0e73a25c85b3e</infohash>
<guid>https://academictorrents.com/details/6690e0941328f9ea8724754e91f0e73a25c85b3e</guid>
<link>https://academictorrents.com/details/6690e0941328f9ea8724754e91f0e73a25c85b3e</link>
<description/>
<size>1078131</size>
</item><item>
<title>Operator Norm Convergence of Spectral Clustering on Level Sets</title>
<category>Paper</category>
<infohash>fc7d3a5d7832e95df39a5f5b15ce5f507c7771fa</infohash>
<guid>https://academictorrents.com/details/fc7d3a5d7832e95df39a5f5b15ce5f507c7771fa</guid>
<link>https://academictorrents.com/details/fc7d3a5d7832e95df39a5f5b15ce5f507c7771fa</link>
<description/>
<size>247312</size>
</item><item>
<title>Learning Coordinate Covariances via Gradients</title>
<category>Paper</category>
<infohash>456bd90c7337fe6f76a6e2fce0374d41b5f46cbe</infohash>
<guid>https://academictorrents.com/details/456bd90c7337fe6f76a6e2fce0374d41b5f46cbe</guid>
<link>https://academictorrents.com/details/456bd90c7337fe6f76a6e2fce0374d41b5f46cbe</link>
<description/>
<size>385958</size>
</item><item>
<title>Communication-Efficient Algorithms for Statistical Optimization</title>
<category>Paper</category>
<infohash>0710d4361d4ca489c7ca6e500ccce71d7699ebe3</infohash>
<guid>https://academictorrents.com/details/0710d4361d4ca489c7ca6e500ccce71d7699ebe3</guid>
<link>https://academictorrents.com/details/0710d4361d4ca489c7ca6e500ccce71d7699ebe3</link>
<description/>
<size>381049</size>
</item><item>
<title>Polynomial-Delay Enumeration of Monotonic Graph Classes</title>
<category>Paper</category>
<infohash>0220f37a4a1cdf8b936d67cb94c4176e66502efa</infohash>
<guid>https://academictorrents.com/details/0220f37a4a1cdf8b936d67cb94c4176e66502efa</guid>
<link>https://academictorrents.com/details/0220f37a4a1cdf8b936d67cb94c4176e66502efa</link>
<description/>
<size>204986</size>
</item><item>
<title>Text Classification using String Kernels</title>
<category>Paper</category>
<infohash>4c9c48699f20f3b4b62e59ca927fef79cebdf5e0</infohash>
<guid>https://academictorrents.com/details/4c9c48699f20f3b4b62e59ca927fef79cebdf5e0</guid>
<link>https://academictorrents.com/details/4c9c48699f20f3b4b62e59ca927fef79cebdf5e0</link>
<description/>
<size>202025</size>
</item><item>
<title>Stability and Generalization</title>
<category>Paper</category>
<infohash>1c04354e3b6a4368ec412fd54de9a8e4d59ed30a</infohash>
<guid>https://academictorrents.com/details/1c04354e3b6a4368ec412fd54de9a8e4d59ed30a</guid>
<link>https://academictorrents.com/details/1c04354e3b6a4368ec412fd54de9a8e4d59ed30a</link>
<description/>
<size>210937</size>
</item><item>
<title>A Nonparametric Statistical Approach to Clustering via Mode Identification</title>
<category>Paper</category>
<infohash>076359232032191f2a858982ff463ddfe08b4ea9</infohash>
<guid>https://academictorrents.com/details/076359232032191f2a858982ff463ddfe08b4ea9</guid>
<link>https://academictorrents.com/details/076359232032191f2a858982ff463ddfe08b4ea9</link>
<description/>
<size>665624</size>
</item><item>
<title>Graph Laplacians and their Convergence on Random Neighborhood Graphs (Special Topic on the Conference on Learning Theory 2005)</title>
<category>Paper</category>
<infohash>b97150c05da979971d5f724fb88b14ab703d4b35</infohash>
<guid>https://academictorrents.com/details/b97150c05da979971d5f724fb88b14ab703d4b35</guid>
<link>https://academictorrents.com/details/b97150c05da979971d5f724fb88b14ab703d4b35</link>
<description/>
<size>2911681</size>
</item><item>
<title>Learning Recursive Control Programs from Problem Solving (Special Topic on Inductive Programming)</title>
<category>Paper</category>
<infohash>df4cf8ef981a46e6b2307e3d80c7134676a3edd2</infohash>
<guid>https://academictorrents.com/details/df4cf8ef981a46e6b2307e3d80c7134676a3edd2</guid>
<link>https://academictorrents.com/details/df4cf8ef981a46e6b2307e3d80c7134676a3edd2</link>
<description/>
<size>192325</size>
</item><item>
<title>Clustering on the Unit Hypersphere using von Mises-Fisher Distributions</title>
<category>Paper</category>
<infohash>89d043438b621c67d89d46dcca3275739c0ac799</infohash>
<guid>https://academictorrents.com/details/89d043438b621c67d89d46dcca3275739c0ac799</guid>
<link>https://academictorrents.com/details/89d043438b621c67d89d46dcca3275739c0ac799</link>
<description/>
<size>295338</size>
</item><item>
<title>Bayesian Co-Training</title>
<category>Paper</category>
<infohash>87746816742b37abaaaa19d5fe915269d46c2c27</infohash>
<guid>https://academictorrents.com/details/87746816742b37abaaaa19d5fe915269d46c2c27</guid>
<link>https://academictorrents.com/details/87746816742b37abaaaa19d5fe915269d46c2c27</link>
<description/>
<size>377649</size>
</item><item>
<title>Some Theory for Generalized Boosting Algorithms</title>
<category>Paper</category>
<infohash>8ddc6ba461fcd7847186f19b86033644dfaa61ec</infohash>
<guid>https://academictorrents.com/details/8ddc6ba461fcd7847186f19b86033644dfaa61ec</guid>
<link>https://academictorrents.com/details/8ddc6ba461fcd7847186f19b86033644dfaa61ec</link>
<description/>
<size>205596</size>
</item><item>
<title>Union Support Recovery in Multi-task Learning</title>
<category>Paper</category>
<infohash>d5ec14d9c059a661009328f8e42a8fb8114a72ae</infohash>
<guid>https://academictorrents.com/details/d5ec14d9c059a661009328f8e42a8fb8114a72ae</guid>
<link>https://academictorrents.com/details/d5ec14d9c059a661009328f8e42a8fb8114a72ae</link>
<description/>
<size>212206</size>
</item><item>
<title>Subgroup Analysis via Recursive Partitioning</title>
<category>Paper</category>
<infohash>a5a7097ea1e9a4f1eadfbaafb1c59c9ef579d0d9</infohash>
<guid>https://academictorrents.com/details/a5a7097ea1e9a4f1eadfbaafb1c59c9ef579d0d9</guid>
<link>https://academictorrents.com/details/a5a7097ea1e9a4f1eadfbaafb1c59c9ef579d0d9</link>
<description/>
<size>143832</size>
</item><item>
<title>A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis</title>
<category>Paper</category>
<infohash>43e3d8cc504a81b9f2d5169a4cdcd6b782e66c0f</infohash>
<guid>https://academictorrents.com/details/43e3d8cc504a81b9f2d5169a4cdcd6b782e66c0f</guid>
<link>https://academictorrents.com/details/43e3d8cc504a81b9f2d5169a4cdcd6b782e66c0f</link>
<description/>
<size>577246</size>
</item><item>
<title>Hierarchical Knowledge Gradient for Sequential Sampling</title>
<category>Paper</category>
<infohash>8fa61f0a8f3d7f90f077633c6519f90a112e157f</infohash>
<guid>https://academictorrents.com/details/8fa61f0a8f3d7f90f077633c6519f90a112e157f</guid>
<link>https://academictorrents.com/details/8fa61f0a8f3d7f90f077633c6519f90a112e157f</link>
<description/>
<size>594380</size>
</item><item>
<title>Boosted Classification Trees and Class Probability/Quantile Estimation</title>
<category>Paper</category>
<infohash>5b16ef1142ebaa902bf0b2c0e0fc5c3b5793f1ca</infohash>
<guid>https://academictorrents.com/details/5b16ef1142ebaa902bf0b2c0e0fc5c3b5793f1ca</guid>
<link>https://academictorrents.com/details/5b16ef1142ebaa902bf0b2c0e0fc5c3b5793f1ca</link>
<description/>
<size>989974</size>
</item><item>
<title>Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis</title>
<category>Paper</category>
<infohash>006d1a3aac3e2ac9ac996592e94287fe22a8dd7d</infohash>
<guid>https://academictorrents.com/details/006d1a3aac3e2ac9ac996592e94287fe22a8dd7d</guid>
<link>https://academictorrents.com/details/006d1a3aac3e2ac9ac996592e94287fe22a8dd7d</link>
<description/>
<size>156311</size>
</item><item>
<title>Learning to Detect and Classify Malicious Executables in the Wild (Special Topic on Machine Learning for Computer Security)</title>
<category>Paper</category>
<infohash>ab04cea5def5dc64eb39a5ace8f958aeff955c13</infohash>
<guid>https://academictorrents.com/details/ab04cea5def5dc64eb39a5ace8f958aeff955c13</guid>
<link>https://academictorrents.com/details/ab04cea5def5dc64eb39a5ace8f958aeff955c13</link>
<description/>
<size>206729</size>
</item><item>
<title>Incorporating Functional Knowledge in Neural Networks</title>
<category>Paper</category>
<infohash>f52efac637894c976728119495b979790e2b894e</infohash>
<guid>https://academictorrents.com/details/f52efac637894c976728119495b979790e2b894e</guid>
<link>https://academictorrents.com/details/f52efac637894c976728119495b979790e2b894e</link>
<description/>
<size>181142</size>
</item><item>
<title>A Robust Procedure For Gaussian Graphical Model Search From Microarray Data With p Larger Than n</title>
<category>Paper</category>
<infohash>c782c5a3a2161449b7b4c7aa9cb57dab69887027</infohash>
<guid>https://academictorrents.com/details/c782c5a3a2161449b7b4c7aa9cb57dab69887027</guid>
<link>https://academictorrents.com/details/c782c5a3a2161449b7b4c7aa9cb57dab69887027</link>
<description/>
<size>685163</size>
</item><item>
<title>A Very Fast Learning Method for Neural Networks Based on Sensitivity Analysis</title>
<category>Paper</category>
<infohash>2b2586a299e191fa8329d6ca35f1892d174071f4</infohash>
<guid>https://academictorrents.com/details/2b2586a299e191fa8329d6ca35f1892d174071f4</guid>
<link>https://academictorrents.com/details/2b2586a299e191fa8329d6ca35f1892d174071f4</link>
<description/>
<size>399654</size>
</item><item>
<title>Evolutionary Model Type Selection for Global Surrogate Modeling</title>
<category>Paper</category>
<infohash>2dc016a42d36912d559e9896404a8afd43ccf47e</infohash>
<guid>https://academictorrents.com/details/2dc016a42d36912d559e9896404a8afd43ccf47e</guid>
<link>https://academictorrents.com/details/2dc016a42d36912d559e9896404a8afd43ccf47e</link>
<description/>
<size>1277094</size>
</item><item>
<title>Estimating Functions for Blind Separation When Sources Have Variance Dependencies</title>
<category>Paper</category>
<infohash>00fcc670016f036c2ff9f3df3c0d3437356e0588</infohash>
<guid>https://academictorrents.com/details/00fcc670016f036c2ff9f3df3c0d3437356e0588</guid>
<link>https://academictorrents.com/details/00fcc670016f036c2ff9f3df3c0d3437356e0588</link>
<description/>
<size>658280</size>
</item><item>
<title>From External to Internal Regret (Special Topic on the Conference on Learning Theory 2005)</title>
<category>Paper</category>
<infohash>9682919797e53ce79c185a851587e5a575574dff</infohash>
<guid>https://academictorrents.com/details/9682919797e53ce79c185a851587e5a575574dff</guid>
<link>https://academictorrents.com/details/9682919797e53ce79c185a851587e5a575574dff</link>
<description/>
<size>163168</size>
</item><item>
<title>Estimation of Gradients and Coordinate Covariation in Classification</title>
<category>Paper</category>
<infohash>aca9aaca4f8f1084ea4f140ffdb2f1c44b05bcdc</infohash>
<guid>https://academictorrents.com/details/aca9aaca4f8f1084ea4f140ffdb2f1c44b05bcdc</guid>
<link>https://academictorrents.com/details/aca9aaca4f8f1084ea4f140ffdb2f1c44b05bcdc</link>
<description/>
<size>317438</size>
</item><item>
<title>On Inferring Application Protocol Behaviors in Encrypted Network Traffic (Special Topic on Machine Learning for Computer Security)</title>
<category>Paper</category>
<infohash>fd60073fa47e2fe363b49d64545720cac67f2797</infohash>
<guid>https://academictorrents.com/details/fd60073fa47e2fe363b49d64545720cac67f2797</guid>
<link>https://academictorrents.com/details/fd60073fa47e2fe363b49d64545720cac67f2797</link>
<description/>
<size>275123</size>
</item><item>
<title>New Horn Revision Algorithms</title>
<category>Paper</category>
<infohash>38e420740a5236e66eda4757734adb1380289565</infohash>
<guid>https://academictorrents.com/details/38e420740a5236e66eda4757734adb1380289565</guid>
<link>https://academictorrents.com/details/38e420740a5236e66eda4757734adb1380289565</link>
<description/>
<size>149384</size>
</item><item>
<title>Forest Density Estimation</title>
<category>Paper</category>
<infohash>1d44d4af516810eca11fdeb7430b1cafc6971c59</infohash>
<guid>https://academictorrents.com/details/1d44d4af516810eca11fdeb7430b1cafc6971c59</guid>
<link>https://academictorrents.com/details/1d44d4af516810eca11fdeb7430b1cafc6971c59</link>
<description/>
<size>809726</size>
</item><item>
<title>Handling Missing Values when Applying Classification Models</title>
<category>Paper</category>
<infohash>2c4f83e3f6f4e0468ac16780caedd93c5799e263</infohash>
<guid>https://academictorrents.com/details/2c4f83e3f6f4e0468ac16780caedd93c5799e263</guid>
<link>https://academictorrents.com/details/2c4f83e3f6f4e0468ac16780caedd93c5799e263</link>
<description/>
<size>389647</size>
</item><item>
<title>Shallow Parsing using Specialized HMMs</title>
<category>Paper</category>
<infohash>2d7636b11b729d5c096b0195125f45d396d17314</infohash>
<guid>https://academictorrents.com/details/2d7636b11b729d5c096b0195125f45d396d17314</guid>
<link>https://academictorrents.com/details/2d7636b11b729d5c096b0195125f45d396d17314</link>
<description/>
<size>308350</size>
</item><item>
<title>Robust Gaussian Process Regression with a Student-t Likelihood</title>
<category>Paper</category>
<infohash>e49fb903a9419d945066f323bd1a6785eeb8f55c</infohash>
<guid>https://academictorrents.com/details/e49fb903a9419d945066f323bd1a6785eeb8f55c</guid>
<link>https://academictorrents.com/details/e49fb903a9419d945066f323bd1a6785eeb8f55c</link>
<description/>
<size>759216</size>
</item><item>
<title>Bayesian Network Learning with Parameter Constraints (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>56d216a606fa1fcf0e1338b7f6252889e7ec351a</infohash>
<guid>https://academictorrents.com/details/56d216a606fa1fcf0e1338b7f6252889e7ec351a</guid>
<link>https://academictorrents.com/details/56d216a606fa1fcf0e1338b7f6252889e7ec351a</link>
<description/>
<size>259545</size>
</item><item>
<title>Sparse/Robust Estimation and Kalman Smoothing with Nonsmooth Log-Concave Densities: Modeling, Computation, and Theory</title>
<category>Paper</category>
<infohash>4f11055a66aee0bc992534acd0a0bfad33a284d0</infohash>
<guid>https://academictorrents.com/details/4f11055a66aee0bc992534acd0a0bfad33a284d0</guid>
<link>https://academictorrents.com/details/4f11055a66aee0bc992534acd0a0bfad33a284d0</link>
<description/>
<size>507877</size>
</item><item>
<title>A Bayesian Approximation Method for Online Ranking</title>
<category>Paper</category>
<infohash>61f3fcb50a5e10fd40d461042e9bc2d33e263e9f</infohash>
<guid>https://academictorrents.com/details/61f3fcb50a5e10fd40d461042e9bc2d33e263e9f</guid>
<link>https://academictorrents.com/details/61f3fcb50a5e10fd40d461042e9bc2d33e263e9f</link>
<description/>
<size>236160</size>
</item><item>
<title>General Polynomial Time Decomposition Algorithms (Special Topic on the Conference on Learning Theory 2005)</title>
<category>Paper</category>
<infohash>85cad139e5d7933d2a0a9609042d7f65d5234910</infohash>
<guid>https://academictorrents.com/details/85cad139e5d7933d2a0a9609042d7f65d5234910</guid>
<link>https://academictorrents.com/details/85cad139e5d7933d2a0a9609042d7f65d5234910</link>
<description/>
<size>154044</size>
</item><item>
<title>Introduction to the Special Issue on Kernel Methods (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>d16ac871220cb7c33415ffaa77bdd94309cb7b3c</infohash>
<guid>https://academictorrents.com/details/d16ac871220cb7c33415ffaa77bdd94309cb7b3c</guid>
<link>https://academictorrents.com/details/d16ac871220cb7c33415ffaa77bdd94309cb7b3c</link>
<description/>
<size>81052</size>
</item><item>
<title>Exploitation of Machine Learning Techniques in Modelling Phrase Movements for Machine Translation</title>
<category>Paper</category>
<infohash>1376e352dc93e1e2066c34531e9f5f39ad442bca</infohash>
<guid>https://academictorrents.com/details/1376e352dc93e1e2066c34531e9f5f39ad442bca</guid>
<link>https://academictorrents.com/details/1376e352dc93e1e2066c34531e9f5f39ad442bca</link>
<description/>
<size>1449207</size>
</item><item>
<title>Large Scale Multiple Kernel Learning (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>eb2f99fb247f0f374e63f0efd4a6874466658f31</infohash>
<guid>https://academictorrents.com/details/eb2f99fb247f0f374e63f0efd4a6874466658f31</guid>
<link>https://academictorrents.com/details/eb2f99fb247f0f374e63f0efd4a6874466658f31</link>
<description/>
<size>937229</size>
</item><item>
<title>Theoretical Analysis of Bayesian Matrix Factorization</title>
<category>Paper</category>
<infohash>15fe36f7944a8d40c7e3dbdb9c3cb2c94ffabdc2</infohash>
<guid>https://academictorrents.com/details/15fe36f7944a8d40c7e3dbdb9c3cb2c94ffabdc2</guid>
<link>https://academictorrents.com/details/15fe36f7944a8d40c7e3dbdb9c3cb2c94ffabdc2</link>
<description/>
<size>581483</size>
</item><item>
<title>Active Learning with Feedback on Features and Instances</title>
<category>Paper</category>
<infohash>0e91d239c92daf6cefe25e8314fa13e48baa1105</infohash>
<guid>https://academictorrents.com/details/0e91d239c92daf6cefe25e8314fa13e48baa1105</guid>
<link>https://academictorrents.com/details/0e91d239c92daf6cefe25e8314fa13e48baa1105</link>
<description/>
<size>386122</size>
</item><item>
<title>Evolutionary Function Approximation for Reinforcement Learning</title>
<category>Paper</category>
<infohash>a7bc5f6bc983877b6fdfa38167d15440e321b9f2</infohash>
<guid>https://academictorrents.com/details/a7bc5f6bc983877b6fdfa38167d15440e321b9f2</guid>
<link>https://academictorrents.com/details/a7bc5f6bc983877b6fdfa38167d15440e321b9f2</link>
<description/>
<size>1552620</size>
</item><item>
<title>Dimension Reduction in Text Classification with Support Vector Machines</title>
<category>Paper</category>
<infohash>2b73ffdc6583445eb35d6555d487444f15fb789d</infohash>
<guid>https://academictorrents.com/details/2b73ffdc6583445eb35d6555d487444f15fb789d</guid>
<link>https://academictorrents.com/details/2b73ffdc6583445eb35d6555d487444f15fb789d</link>
<description/>
<size>107393</size>
</item><item>
<title>Sparseness vs Estimating Conditional Probabilities: Some Asymptotic Results</title>
<category>Paper</category>
<infohash>8e34e6991e7bb8c97b82ab309623e5602a8f0610</infohash>
<guid>https://academictorrents.com/details/8e34e6991e7bb8c97b82ab309623e5602a8f0610</guid>
<link>https://academictorrents.com/details/8e34e6991e7bb8c97b82ab309623e5602a8f0610</link>
<description/>
<size>146637</size>
</item><item>
<title>Kernels on Prolog Proof Trees: Statistical Learning in the ILP Setting (Special Topic on Inductive Programming)</title>
<category>Paper</category>
<infohash>d69b2a90865890327ccb1386080218970d16e5d4</infohash>
<guid>https://academictorrents.com/details/d69b2a90865890327ccb1386080218970d16e5d4</guid>
<link>https://academictorrents.com/details/d69b2a90865890327ccb1386080218970d16e5d4</link>
<description/>
<size>552761</size>
</item><item>
<title>PAC-Bayes Risk Bounds for Stochastic Averages and Majority Votes of Sample-Compressed Classifiers</title>
<category>Paper</category>
<infohash>48770b460f934ebe6ccc2a05d9e913fa62ccd5a8</infohash>
<guid>https://academictorrents.com/details/48770b460f934ebe6ccc2a05d9e913fa62ccd5a8</guid>
<link>https://academictorrents.com/details/48770b460f934ebe6ccc2a05d9e913fa62ccd5a8</link>
<description/>
<size>225562</size>
</item><item>
<title>New Algorithms for Efficient High-Dimensional Nonparametric Classification</title>
<category>Paper</category>
<infohash>1311aa4cb92a3da7928a5dea4c43de7a6840bb0d</infohash>
<guid>https://academictorrents.com/details/1311aa4cb92a3da7928a5dea4c43de7a6840bb0d</guid>
<link>https://academictorrents.com/details/1311aa4cb92a3da7928a5dea4c43de7a6840bb0d</link>
<description/>
<size>183679</size>
</item><item>
<title>Anytime Learning of Decision Trees</title>
<category>Paper</category>
<infohash>3f8d53cdaca8ec8852963bef9f37d60d7309860e</infohash>
<guid>https://academictorrents.com/details/3f8d53cdaca8ec8852963bef9f37d60d7309860e</guid>
<link>https://academictorrents.com/details/3f8d53cdaca8ec8852963bef9f37d60d7309860e</link>
<description/>
<size>364505</size>
</item><item>
<title>DirectLiNGAM: A Direct Method for Learning a Linear Non-Gaussian Structural Equation Model</title>
<category>Paper</category>
<infohash>65bc332a9ff22064b7d377174051e15955826649</infohash>
<guid>https://academictorrents.com/details/65bc332a9ff22064b7d377174051e15955826649</guid>
<link>https://academictorrents.com/details/65bc332a9ff22064b7d377174051e15955826649</link>
<description/>
<size>934533</size>
</item><item>
<title>Hyper-Sparse Optimal Aggregation</title>
<category>Paper</category>
<infohash>9fdf62245fad8cc91904e1d6db646b3cd2af6b8e</infohash>
<guid>https://academictorrents.com/details/9fdf62245fad8cc91904e1d6db646b3cd2af6b8e</guid>
<link>https://academictorrents.com/details/9fdf62245fad8cc91904e1d6db646b3cd2af6b8e</link>
<description/>
<size>900657</size>
</item><item>
<title>Efficient SVM Training Using Low-Rank Kernel Representations (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>f7e3faa34b142adb79f3c869e33a7cbc2fc3cc19</infohash>
<guid>https://academictorrents.com/details/f7e3faa34b142adb79f3c869e33a7cbc2fc3cc19</guid>
<link>https://academictorrents.com/details/f7e3faa34b142adb79f3c869e33a7cbc2fc3cc19</link>
<description/>
<size>463052</size>
</item><item>
<title>Super-Linear Convergence of Dual Augmented Lagrangian Algorithm for Sparsity Regularized Estimation</title>
<category>Paper</category>
<infohash>7bcab37cc9727bef5eac69a8b43826f99d7493c2</infohash>
<guid>https://academictorrents.com/details/7bcab37cc9727bef5eac69a8b43826f99d7493c2</guid>
<link>https://academictorrents.com/details/7bcab37cc9727bef5eac69a8b43826f99d7493c2</link>
<description/>
<size>852468</size>
</item><item>
<title>Bounds for the Loss in Probability of Correct Classification Under Model Based Approximation</title>
<category>Paper</category>
<infohash>0fc69cb913852ee93543f0fe741755e03ba081b4</infohash>
<guid>https://academictorrents.com/details/0fc69cb913852ee93543f0fe741755e03ba081b4</guid>
<link>https://academictorrents.com/details/0fc69cb913852ee93543f0fe741755e03ba081b4</link>
<description/>
<size>266307</size>
</item><item>
<title>Separating Models of Learning from Correlated and Uncorrelated Data (Special Topic on the Conference on Learning Theory 2005)</title>
<category>Paper</category>
<infohash>b591e6306d970a3db3a2d7217a4d2ddf38516707</infohash>
<guid>https://academictorrents.com/details/b591e6306d970a3db3a2d7217a4d2ddf38516707</guid>
<link>https://academictorrents.com/details/b591e6306d970a3db3a2d7217a4d2ddf38516707</link>
<description/>
<size>132695</size>
</item><item>
<title>Preventing Over-Fitting during Model Selection via Bayesian Regularisation of the Hyper-Parameters (Special Topic on Model Selection)</title>
<category>Paper</category>
<infohash>08b92c86b378c859ff1a1cae59db757f3ac28f9f</infohash>
<guid>https://academictorrents.com/details/08b92c86b378c859ff1a1cae59db757f3ac28f9f</guid>
<link>https://academictorrents.com/details/08b92c86b378c859ff1a1cae59db757f3ac28f9f</link>
<description/>
<size>153711</size>
</item><item>
<title>Nonlinear Estimators and Tail Bounds for Dimension Reduction in l1 Using Cauchy Random Projections</title>
<category>Paper</category>
<infohash>b6a11a170af5a00d6a1ccb210305a0d1cec566f2</infohash>
<guid>https://academictorrents.com/details/b6a11a170af5a00d6a1ccb210305a0d1cec566f2</guid>
<link>https://academictorrents.com/details/b6a11a170af5a00d6a1ccb210305a0d1cec566f2</link>
<description/>
<size>337826</size>
</item><item>
<title>A Simulation-Based Algorithm for Ergodic Control of Markov Chains Conditioned on Rare Events</title>
<category>Paper</category>
<infohash>e40d76248365e1e8d49d052f1b0c2eb3aa3d7a6e</infohash>
<guid>https://academictorrents.com/details/e40d76248365e1e8d49d052f1b0c2eb3aa3d7a6e</guid>
<link>https://academictorrents.com/details/e40d76248365e1e8d49d052f1b0c2eb3aa3d7a6e</link>
<description/>
<size>219604</size>
</item><item>
<title>Group Lasso Estimation of High-dimensional Covariance Matrices</title>
<category>Paper</category>
<infohash>b94b1d2d051795e089f6a3e99ae13f77ec7d2719</infohash>
<guid>https://academictorrents.com/details/b94b1d2d051795e089f6a3e99ae13f77ec7d2719</guid>
<link>https://academictorrents.com/details/b94b1d2d051795e089f6a3e99ae13f77ec7d2719</link>
<description/>
<size>354443</size>
</item><item>
<title>A Linear Non-Gaussian Acyclic Model for Causal Discovery</title>
<category>Paper</category>
<infohash>83c385e427011f7e57c43b389adb63247ac98490</infohash>
<guid>https://academictorrents.com/details/83c385e427011f7e57c43b389adb63247ac98490</guid>
<link>https://academictorrents.com/details/83c385e427011f7e57c43b389adb63247ac98490</link>
<description/>
<size>417284</size>
</item><item>
<title>Core Vector Machines: Fast SVM Training on Very Large Data Sets</title>
<category>Paper</category>
<infohash>5f89897c06b7151321830600009ef153f6b45941</infohash>
<guid>https://academictorrents.com/details/5f89897c06b7151321830600009ef153f6b45941</guid>
<link>https://academictorrents.com/details/5f89897c06b7151321830600009ef153f6b45941</link>
<description/>
<size>417311</size>
</item><item>
<title>Refinable Kernels</title>
<category>Paper</category>
<infohash>27a1cc2a72cf88a10772e3e7ff8d03c3e4bb4a0d</infohash>
<guid>https://academictorrents.com/details/27a1cc2a72cf88a10772e3e7ff8d03c3e4bb4a0d</guid>
<link>https://academictorrents.com/details/27a1cc2a72cf88a10772e3e7ff8d03c3e4bb4a0d</link>
<description/>
<size>261780</size>
</item><item>
<title>Linear Programs for Hypotheses Selection in Probabilistic Inference Models (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>981132f46a26b773c8c1a9396372ce6546861867</infohash>
<guid>https://academictorrents.com/details/981132f46a26b773c8c1a9396372ce6546861867</guid>
<link>https://academictorrents.com/details/981132f46a26b773c8c1a9396372ce6546861867</link>
<description/>
<size>127427</size>
</item><item>
<title>An Interior-Point Method for Large-Scale l1-Regularized Logistic Regression</title>
<category>Paper</category>
<infohash>2deb1384378d8b6b777db438b4f57ff7866274a5</infohash>
<guid>https://academictorrents.com/details/2deb1384378d8b6b777db438b4f57ff7866274a5</guid>
<link>https://academictorrents.com/details/2deb1384378d8b6b777db438b4f57ff7866274a5</link>
<description/>
<size>910983</size>
</item><item>
<title>Universality, Characteristic Kernels and RKHS Embedding of Measures</title>
<category>Paper</category>
<infohash>517b5ac8eca29f7553968a5a93f8e500f8f6243b</infohash>
<guid>https://academictorrents.com/details/517b5ac8eca29f7553968a5a93f8e500f8f6243b</guid>
<link>https://academictorrents.com/details/517b5ac8eca29f7553968a5a93f8e500f8f6243b</link>
<description/>
<size>369711</size>
</item><item>
<title>Maximum Entropy Density Estimation with Generalized Regularization and an Application to Species Distribution Modeling</title>
<category>Paper</category>
<infohash>ee546a96c061eeaaac4aadb9f94cae166ad892b8</infohash>
<guid>https://academictorrents.com/details/ee546a96c061eeaaac4aadb9f94cae166ad892b8</guid>
<link>https://academictorrents.com/details/ee546a96c061eeaaac4aadb9f94cae166ad892b8</link>
<description/>
<size>432510</size>
</item><item>
<title>Minimum Description Length Penalization for Group and Multi-Task Sparse Learning</title>
<category>Paper</category>
<infohash>95477d6818dba2c532684af7bd9a6d645b0c14d5</infohash>
<guid>https://academictorrents.com/details/95477d6818dba2c532684af7bd9a6d645b0c14d5</guid>
<link>https://academictorrents.com/details/95477d6818dba2c532684af7bd9a6d645b0c14d5</link>
<description/>
<size>317951</size>
</item><item>
<title>Bayesian Generalized Kernel Mixed Models</title>
<category>Paper</category>
<infohash>5aa43bec0cb3510349f9314a9cf9f7a57fa9678f</infohash>
<guid>https://academictorrents.com/details/5aa43bec0cb3510349f9314a9cf9f7a57fa9678f</guid>
<link>https://academictorrents.com/details/5aa43bec0cb3510349f9314a9cf9f7a57fa9678f</link>
<description/>
<size>340760</size>
</item><item>
<title>The Interplay of Optimization and Machine Learning Research (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>f59218767a5dade9760abec4d64dd1ef71bc7581</infohash>
<guid>https://academictorrents.com/details/f59218767a5dade9760abec4d64dd1ef71bc7581</guid>
<link>https://academictorrents.com/details/f59218767a5dade9760abec4d64dd1ef71bc7581</link>
<description/>
<size>118184</size>
</item><item>
<title>Margin Trees for High-dimensional Classification</title>
<category>Paper</category>
<infohash>26038553c5011297c6784264b8895edc840a130a</infohash>
<guid>https://academictorrents.com/details/26038553c5011297c6784264b8895edc840a130a</guid>
<link>https://academictorrents.com/details/26038553c5011297c6784264b8895edc840a130a</link>
<description/>
<size>102520</size>
</item><item>
<title>Bayesian Quadratic Discriminant Analysis</title>
<category>Paper</category>
<infohash>1a41a2ea96101eef06364ea8dc31cfff7ee6abef</infohash>
<guid>https://academictorrents.com/details/1a41a2ea96101eef06364ea8dc31cfff7ee6abef</guid>
<link>https://academictorrents.com/details/1a41a2ea96101eef06364ea8dc31cfff7ee6abef</link>
<description/>
<size>196901</size>
</item><item>
<title>Learning Equivariant Functions with Matrix Valued Kernels</title>
<category>Paper</category>
<infohash>46a545c2336e13e237de06acbc584afd7a1cded7</infohash>
<guid>https://academictorrents.com/details/46a545c2336e13e237de06acbc584afd7a1cded7</guid>
<link>https://academictorrents.com/details/46a545c2336e13e237de06acbc584afd7a1cded7</link>
<description/>
<size>495753</size>
</item><item>
<title>Dirichlet Process Mixtures of Generalized Linear Models</title>
<category>Paper</category>
<infohash>3661cf7a069b9defb6cf5dd385e4ba5c03c43052</infohash>
<guid>https://academictorrents.com/details/3661cf7a069b9defb6cf5dd385e4ba5c03c43052</guid>
<link>https://academictorrents.com/details/3661cf7a069b9defb6cf5dd385e4ba5c03c43052</link>
<description/>
<size>2926718</size>
</item><item>
<title>Dynamics and Generalization Ability of LVQ Algorithms</title>
<category>Paper</category>
<infohash>d70bf26ff67cac69225ba1aad75beb599dcb47bb</infohash>
<guid>https://academictorrents.com/details/d70bf26ff67cac69225ba1aad75beb599dcb47bb</guid>
<link>https://academictorrents.com/details/d70bf26ff67cac69225ba1aad75beb599dcb47bb</link>
<description/>
<size>608475</size>
</item><item>
<title>X-Armed Bandits</title>
<category>Paper</category>
<infohash>82a415bed4fd188b34ea7b50c2b92c5659428a50</infohash>
<guid>https://academictorrents.com/details/82a415bed4fd188b34ea7b50c2b92c5659428a50</guid>
<link>https://academictorrents.com/details/82a415bed4fd188b34ea7b50c2b92c5659428a50</link>
<description/>
<size>835486</size>
</item><item>
<title>A Family of Simple Non-Parametric Kernel Learning Algorithms</title>
<category>Paper</category>
<infohash>25e2c246cb979b5177f6f503e7b9936e6502e9e2</infohash>
<guid>https://academictorrents.com/details/25e2c246cb979b5177f6f503e7b9936e6502e9e2</guid>
<link>https://academictorrents.com/details/25e2c246cb979b5177f6f503e7b9936e6502e9e2</link>
<description/>
<size>620863</size>
</item><item>
<title>On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>e5e4db66ae2b6757f9a4ddca5ea6793db4c9b267</infohash>
<guid>https://academictorrents.com/details/e5e4db66ae2b6757f9a4ddca5ea6793db4c9b267</guid>
<link>https://academictorrents.com/details/e5e4db66ae2b6757f9a4ddca5ea6793db4c9b267</link>
<description/>
<size>494387</size>
</item><item>
<title>Learning with Structured Sparsity</title>
<category>Paper</category>
<infohash>47a3b370d978d2db44d8d4b16198910f8008eece</infohash>
<guid>https://academictorrents.com/details/47a3b370d978d2db44d8d4b16198910f8008eece</guid>
<link>https://academictorrents.com/details/47a3b370d978d2db44d8d4b16198910f8008eece</link>
<description/>
<size>518696</size>
</item><item>
<title>Learning a Hidden Hypergraph</title>
<category>Paper</category>
<infohash>0072e334eb0341d1a3aa1228d16142da5da7729a</infohash>
<guid>https://academictorrents.com/details/0072e334eb0341d1a3aa1228d16142da5da7729a</guid>
<link>https://academictorrents.com/details/0072e334eb0341d1a3aa1228d16142da5da7729a</link>
<description/>
<size>178935</size>
</item><item>
<title>Supervised Descriptive Rule Discovery: A Unifying Survey of Contrast Set, Emerging Pattern and Subgroup Mining</title>
<category>Paper</category>
<infohash>2cf0863d0ecaeffdf82e75ae8b572cc0da2ee4f6</infohash>
<guid>https://academictorrents.com/details/2cf0863d0ecaeffdf82e75ae8b572cc0da2ee4f6</guid>
<link>https://academictorrents.com/details/2cf0863d0ecaeffdf82e75ae8b572cc0da2ee4f6</link>
<description/>
<size>278964</size>
</item><item>
<title>Structured Variable Selection with Sparsity-Inducing Norms</title>
<category>Paper</category>
<infohash>a9da48e21b8fb1dbf8f7736d5f04a4e688c9b3f7</infohash>
<guid>https://academictorrents.com/details/a9da48e21b8fb1dbf8f7736d5f04a4e688c9b3f7</guid>
<link>https://academictorrents.com/details/a9da48e21b8fb1dbf8f7736d5f04a4e688c9b3f7</link>
<description/>
<size>446849</size>
</item><item>
<title>Cumulative Distribution Networks and the Derivative-sum-product Algorithm: Models and Inference for Cumulative Distribution Functions on Graphs</title>
<category>Paper</category>
<infohash>a14356df526b633b5280cecf9f65e4b6141b609c</infohash>
<guid>https://academictorrents.com/details/a14356df526b633b5280cecf9f65e4b6141b609c</guid>
<link>https://academictorrents.com/details/a14356df526b633b5280cecf9f65e4b6141b609c</link>
<description/>
<size>2025724</size>
</item><item>
<title>Optimising Kernel Parameters and Regularisation Coefficients for Non-linear Discriminant Analysis</title>
<category>Paper</category>
<infohash>34e81945b75f04fc9d761ff03107482eb6fdbda6</infohash>
<guid>https://academictorrents.com/details/34e81945b75f04fc9d761ff03107482eb6fdbda6</guid>
<link>https://academictorrents.com/details/34e81945b75f04fc9d761ff03107482eb6fdbda6</link>
<description/>
<size>487931</size>
</item><item>
<title>Large Scale Transductive SVMs</title>
<category>Paper</category>
<infohash>fe5196636bb4f1f391220fbe64d73393c8ef1481</infohash>
<guid>https://academictorrents.com/details/fe5196636bb4f1f391220fbe64d73393c8ef1481</guid>
<link>https://academictorrents.com/details/fe5196636bb4f1f391220fbe64d73393c8ef1481</link>
<description/>
<size>220629</size>
</item><item>
<title>Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples</title>
<category>Paper</category>
<infohash>f3a3b1a97bd8fdb762685f05318b822e295fa059</infohash>
<guid>https://academictorrents.com/details/f3a3b1a97bd8fdb762685f05318b822e295fa059</guid>
<link>https://academictorrents.com/details/f3a3b1a97bd8fdb762685f05318b822e295fa059</link>
<description/>
<size>416149</size>
</item><item>
<title>On the Learnability of Shuffle Ideals</title>
<category>Paper</category>
<infohash>1994966b15a813e041fd3b12333cbb91fd51ee86</infohash>
<guid>https://academictorrents.com/details/1994966b15a813e041fd3b12333cbb91fd51ee86</guid>
<link>https://academictorrents.com/details/1994966b15a813e041fd3b12333cbb91fd51ee86</link>
<description/>
<size>181534</size>
</item><item>
<title>Value Regularization and Fenchel Duality</title>
<category>Paper</category>
<infohash>465d5a852f238ee8a686c9cbaab84f447477c45d</infohash>
<guid>https://academictorrents.com/details/465d5a852f238ee8a686c9cbaab84f447477c45d</guid>
<link>https://academictorrents.com/details/465d5a852f238ee8a686c9cbaab84f447477c45d</link>
<description/>
<size>299190</size>
</item><item>
<title>Learning Minimum Volume Sets</title>
<category>Paper</category>
<infohash>c672b8774ca121ea3393cdbc25be2d99d17cf79f</infohash>
<guid>https://academictorrents.com/details/c672b8774ca121ea3393cdbc25be2d99d17cf79f</guid>
<link>https://academictorrents.com/details/c672b8774ca121ea3393cdbc25be2d99d17cf79f</link>
<description/>
<size>917285</size>
</item><item>
<title>Gini Support Vector Machine: Quadratic Entropy Based Robust Multi-Class Probability Regression</title>
<category>Paper</category>
<infohash>edf90fe1660c45f75c07a34842d2c6b2361e13e7</infohash>
<guid>https://academictorrents.com/details/edf90fe1660c45f75c07a34842d2c6b2361e13e7</guid>
<link>https://academictorrents.com/details/edf90fe1660c45f75c07a34842d2c6b2361e13e7</link>
<description/>
<size>817863</size>
</item><item>
<title>CARP: Software for Fishing Out Good Clustering Algorithms</title>
<category>Paper</category>
<infohash>5e484b1c2b27bcb2666f500789826af07d052fe5</infohash>
<guid>https://academictorrents.com/details/5e484b1c2b27bcb2666f500789826af07d052fe5</guid>
<link>https://academictorrents.com/details/5e484b1c2b27bcb2666f500789826af07d052fe5</link>
<description/>
<size>584436</size>
</item><item>
<title>Discriminative Learning of Bayesian Networks via Factorized Conditional Log-Likelihood</title>
<category>Paper</category>
<infohash>43174c736752d532c6beef2db3a51f160f3e20d4</infohash>
<guid>https://academictorrents.com/details/43174c736752d532c6beef2db3a51f160f3e20d4</guid>
<link>https://academictorrents.com/details/43174c736752d532c6beef2db3a51f160f3e20d4</link>
<description/>
<size>447140</size>
</item><item>
<title>Learning Multi-modal Similarity</title>
<category>Paper</category>
<infohash>34ca78c54a4f49615dd060a9d31fffaf4646ba89</infohash>
<guid>https://academictorrents.com/details/34ca78c54a4f49615dd060a9d31fffaf4646ba89</guid>
<link>https://academictorrents.com/details/34ca78c54a4f49615dd060a9d31fffaf4646ba89</link>
<description/>
<size>468434</size>
</item><item>
<title>Parallel Algorithm for Learning Optimal Bayesian Network Structure</title>
<category>Paper</category>
<infohash>cc032abd599138c3215423c5a746c4f24b70714c</infohash>
<guid>https://academictorrents.com/details/cc032abd599138c3215423c5a746c4f24b70714c</guid>
<link>https://academictorrents.com/details/cc032abd599138c3215423c5a746c4f24b70714c</link>
<description/>
<size>394237</size>
</item><item>
<title>Scikit-learn: Machine Learning in Python</title>
<category>Paper</category>
<infohash>5ba4939a00a9b21629a0ad7d376898b768d997a3</infohash>
<guid>https://academictorrents.com/details/5ba4939a00a9b21629a0ad7d376898b768d997a3</guid>
<link>https://academictorrents.com/details/5ba4939a00a9b21629a0ad7d376898b768d997a3</link>
<description/>
<size>42310</size>
</item><item>
<title>Stagewise Lasso</title>
<category>Paper</category>
<infohash>defc2d193d959e9819bddb6d9ca11eaa70802569</infohash>
<guid>https://academictorrents.com/details/defc2d193d959e9819bddb6d9ca11eaa70802569</guid>
<link>https://academictorrents.com/details/defc2d193d959e9819bddb6d9ca11eaa70802569</link>
<description/>
<size>611527</size>
</item><item>
<title>Double Updating Online Learning</title>
<category>Paper</category>
<infohash>0bc529b79f14c92c8a4b273b3197601aafb4c5d0</infohash>
<guid>https://academictorrents.com/details/0bc529b79f14c92c8a4b273b3197601aafb4c5d0</guid>
<link>https://academictorrents.com/details/0bc529b79f14c92c8a4b273b3197601aafb4c5d0</link>
<description/>
<size>302619</size>
</item><item>
<title>Learning Nondeterministic Classifiers</title>
<category>Paper</category>
<infohash>5d1dbaf04b0e5b1b254f4167267f3b345a8e6d7e</infohash>
<guid>https://academictorrents.com/details/5d1dbaf04b0e5b1b254f4167267f3b345a8e6d7e</guid>
<link>https://academictorrents.com/details/5d1dbaf04b0e5b1b254f4167267f3b345a8e6d7e</link>
<description/>
<size>422558</size>
</item><item>
<title>Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization</title>
<category>Paper</category>
<infohash>ba016e245fb44ec204437338eecd1cf4b94c65a8</infohash>
<guid>https://academictorrents.com/details/ba016e245fb44ec204437338eecd1cf4b94c65a8</guid>
<link>https://academictorrents.com/details/ba016e245fb44ec204437338eecd1cf4b94c65a8</link>
<description/>
<size>711510</size>
</item><item>
<title>Sparse Linear Identifiable Multivariate Modeling</title>
<category>Paper</category>
<infohash>d2a6b778455b4284bfe0a44ea794ce35cf4573f4</infohash>
<guid>https://academictorrents.com/details/d2a6b778455b4284bfe0a44ea794ce35cf4573f4</guid>
<link>https://academictorrents.com/details/d2a6b778455b4284bfe0a44ea794ce35cf4573f4</link>
<description/>
<size>533988</size>
</item><item>
<title>Distance Metric Learning for Large Margin Nearest Neighbor Classification</title>
<category>Paper</category>
<infohash>530e4635cf63a20a58a2e032707730ca226a61f0</infohash>
<guid>https://academictorrents.com/details/530e4635cf63a20a58a2e032707730ca226a61f0</guid>
<link>https://academictorrents.com/details/530e4635cf63a20a58a2e032707730ca226a61f0</link>
<description/>
<size>1980873</size>
</item><item>
<title>Step Size Adaptation in Reproducing Kernel Hilbert Space</title>
<category>Paper</category>
<infohash>5640eae7ca2e270385c383c9c2e1b7969d95c246</infohash>
<guid>https://academictorrents.com/details/5640eae7ca2e270385c383c9c2e1b7969d95c246</guid>
<link>https://academictorrents.com/details/5640eae7ca2e270385c383c9c2e1b7969d95c246</link>
<description/>
<size>450312</size>
</item><item>
<title>Anechoic Blind Source Separation Using Wigner Marginals</title>
<category>Paper</category>
<infohash>3f009f0f96c7fc816eaff965df80a2d33d8237c9</infohash>
<guid>https://academictorrents.com/details/3f009f0f96c7fc816eaff965df80a2d33d8237c9</guid>
<link>https://academictorrents.com/details/3f009f0f96c7fc816eaff965df80a2d33d8237c9</link>
<description/>
<size>773750</size>
</item><item>
<title>lp-Norm Multiple Kernel Learning</title>
<category>Paper</category>
<infohash>29c6ca8dd86b10b4d0870f86dac6e78eebe268a1</infohash>
<guid>https://academictorrents.com/details/29c6ca8dd86b10b4d0870f86dac6e78eebe268a1</guid>
<link>https://academictorrents.com/details/29c6ca8dd86b10b4d0870f86dac6e78eebe268a1</link>
<description/>
<size>434395</size>
</item><item>
<title>Learning Horn Expressions with LOGAN-H</title>
<category>Paper</category>
<infohash>f4e3b96b5b9e4e4974f218341b30f4d7f59afa8c</infohash>
<guid>https://academictorrents.com/details/f4e3b96b5b9e4e4974f218341b30f4d7f59afa8c</guid>
<link>https://academictorrents.com/details/f4e3b96b5b9e4e4974f218341b30f4d7f59afa8c</link>
<description/>
<size>814712</size>
</item><item>
<title>Posterior Sparsity in Unsupervised Dependency Parsing</title>
<category>Paper</category>
<infohash>f933c7d421ba53868b3a03bd73728294abdb70d5</infohash>
<guid>https://academictorrents.com/details/f933c7d421ba53868b3a03bd73728294abdb70d5</guid>
<link>https://academictorrents.com/details/f933c7d421ba53868b3a03bd73728294abdb70d5</link>
<description/>
<size>383819</size>
</item><item>
<title>Introduction to the Special Topic on Grammar Induction, Representation of Language and Language Learning</title>
<category>Paper</category>
<infohash>71e1c9692c556e252aa7e2f4715c419ee447039b</infohash>
<guid>https://academictorrents.com/details/71e1c9692c556e252aa7e2f4715c419ee447039b</guid>
<link>https://academictorrents.com/details/71e1c9692c556e252aa7e2f4715c419ee447039b</link>
<description/>
<size>25980</size>
</item><item>
<title>Nonlinear Models Using Dirichlet Process Mixtures</title>
<category>Paper</category>
<infohash>077943f332927f73167acaf86c83073fcdbb0f69</infohash>
<guid>https://academictorrents.com/details/077943f332927f73167acaf86c83073fcdbb0f69</guid>
<link>https://academictorrents.com/details/077943f332927f73167acaf86c83073fcdbb0f69</link>
<description/>
<size>231280</size>
</item><item>
<title>Robust Approximate Bilinear Programming for Value Function Approximation</title>
<category>Paper</category>
<infohash>453d9ce6d3a6f23b5909fbefbbab84b0c74bb855</infohash>
<guid>https://academictorrents.com/details/453d9ce6d3a6f23b5909fbefbbab84b0c74bb855</guid>
<link>https://academictorrents.com/details/453d9ce6d3a6f23b5909fbefbbab84b0c74bb855</link>
<description/>
<size>262335</size>
</item><item>
<title>On Using Extended Statistical Queries to Avoid Membership Queries</title>
<category>Paper</category>
<infohash>b0ba3a0e719c2b3ee91c38662dc0909c6786ab9b</infohash>
<guid>https://academictorrents.com/details/b0ba3a0e719c2b3ee91c38662dc0909c6786ab9b</guid>
<link>https://academictorrents.com/details/b0ba3a0e719c2b3ee91c38662dc0909c6786ab9b</link>
<description/>
<size>1478487</size>
</item><item>
<title>Support Vector Machine Active Learning with Applications to Text Classification</title>
<category>Paper</category>
<infohash>639b6fa478c23316be69c49ea4e42dfc3d73371d</infohash>
<guid>https://academictorrents.com/details/639b6fa478c23316be69c49ea4e42dfc3d73371d</guid>
<link>https://academictorrents.com/details/639b6fa478c23316be69c49ea4e42dfc3d73371d</link>
<description/>
<size>309683</size>
</item><item>
<title>NEUROSVM: An Architecture to Reduce the Effect of the Choice of Kernel on the Performance of SVM</title>
<category>Paper</category>
<infohash>a0c95108809d820da29be0b47242fcd7823d952e</infohash>
<guid>https://academictorrents.com/details/a0c95108809d820da29be0b47242fcd7823d952e</guid>
<link>https://academictorrents.com/details/a0c95108809d820da29be0b47242fcd7823d952e</link>
<description/>
<size>452726</size>
</item><item>
<title>JKernelMachines: A Simple Framework for Kernel Machines</title>
<category>Paper</category>
<infohash>525232019cb30357948aee5a5d0132c5788a739c</infohash>
<guid>https://academictorrents.com/details/525232019cb30357948aee5a5d0132c5788a739c</guid>
<link>https://academictorrents.com/details/525232019cb30357948aee5a5d0132c5788a739c</link>
<description/>
<size>69750</size>
</item><item>
<title>Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data</title>
<category>Paper</category>
<infohash>2cc1158f0537514d2050c24d07a09ac9bebfd3da</infohash>
<guid>https://academictorrents.com/details/2cc1158f0537514d2050c24d07a09ac9bebfd3da</guid>
<link>https://academictorrents.com/details/2cc1158f0537514d2050c24d07a09ac9bebfd3da</link>
<description/>
<size>259925</size>
</item><item>
<title>Text Chunking based on a Generalization of Winnow</title>
<category>Paper</category>
<infohash>1fe06c45173b62ff473cc3da044ec688e8c862a0</infohash>
<guid>https://academictorrents.com/details/1fe06c45173b62ff473cc3da044ec688e8c862a0</guid>
<link>https://academictorrents.com/details/1fe06c45173b62ff473cc3da044ec688e8c862a0</link>
<description/>
<size>199638</size>
</item><item>
<title>Infinite-σ Limits For Tikhonov Regularization</title>
<category>Paper</category>
<infohash>d9af67a76d835084cfec13a08989536613c5b34e</infohash>
<guid>https://academictorrents.com/details/d9af67a76d835084cfec13a08989536613c5b34e</guid>
<link>https://academictorrents.com/details/d9af67a76d835084cfec13a08989536613c5b34e</link>
<description/>
<size>223738</size>
</item><item>
<title>Learning a Mahalanobis Metric from Equivalence Constraints</title>
<category>Paper</category>
<infohash>0a1cf731c65a7252487bf400a035eee85da7319a</infohash>
<guid>https://academictorrents.com/details/0a1cf731c65a7252487bf400a035eee85da7319a</guid>
<link>https://academictorrents.com/details/0a1cf731c65a7252487bf400a035eee85da7319a</link>
<description/>
<size>868954</size>
</item><item>
<title>In All Likelihood, Deep Belief Is Not Enough</title>
<category>Paper</category>
<infohash>9f5e775d7ce6272037c3bd72df4390c493acfa7b</infohash>
<guid>https://academictorrents.com/details/9f5e775d7ce6272037c3bd72df4390c493acfa7b</guid>
<link>https://academictorrents.com/details/9f5e775d7ce6272037c3bd72df4390c493acfa7b</link>
<description/>
<size>447320</size>
</item><item>
<title>A Bayesian Approach for Learning and Planning in Partially Observable Markov Decision Processes</title>
<category>Paper</category>
<infohash>55f4ffc91509ab0f716cb86c642585a25bfb93cd</infohash>
<guid>https://academictorrents.com/details/55f4ffc91509ab0f716cb86c642585a25bfb93cd</guid>
<link>https://academictorrents.com/details/55f4ffc91509ab0f716cb86c642585a25bfb93cd</link>
<description/>
<size>372874</size>
</item><item>
<title>Discriminative Learning Under Covariate Shift</title>
<category>Paper</category>
<infohash>88f059942483326a34a7990450d440bba733ae4e</infohash>
<guid>https://academictorrents.com/details/88f059942483326a34a7990450d440bba733ae4e</guid>
<link>https://academictorrents.com/details/88f059942483326a34a7990450d440bba733ae4e</link>
<description/>
<size>165700</size>
</item><item>
<title>Efficient Program Synthesis Using Constraint Satisfaction in Inductive Logic Programming</title>
<category>Paper</category>
<infohash>d65a9e6281772cd06869014308ca69018f45e2f5</infohash>
<guid>https://academictorrents.com/details/d65a9e6281772cd06869014308ca69018f45e2f5</guid>
<link>https://academictorrents.com/details/d65a9e6281772cd06869014308ca69018f45e2f5</link>
<description/>
<size>278958</size>
</item><item>
<title>Differentially Private Empirical Risk Minimization</title>
<category>Paper</category>
<infohash>e81f29f96c44213cb50e33cf457d621a4a298a26</infohash>
<guid>https://academictorrents.com/details/e81f29f96c44213cb50e33cf457d621a4a298a26</guid>
<link>https://academictorrents.com/details/e81f29f96c44213cb50e33cf457d621a4a298a26</link>
<description/>
<size>339061</size>
</item><item>
<title>Spam Filtering Using Statistical Data Compression Models (Special Topic on Machine Learning for Computer Security)</title>
<category>Paper</category>
<infohash>cfb1b3100fcf1175e88124ae04b49533cdff2c87</infohash>
<guid>https://academictorrents.com/details/cfb1b3100fcf1175e88124ae04b49533cdff2c87</guid>
<link>https://academictorrents.com/details/cfb1b3100fcf1175e88124ae04b49533cdff2c87</link>
<description/>
<size>336310</size>
</item><item>
<title>On Model Selection Consistency of Lasso</title>
<category>Paper</category>
<infohash>9ca348a1e9b6a56f90e53ab1cedf7ab783f55eb6</infohash>
<guid>https://academictorrents.com/details/9ca348a1e9b6a56f90e53ab1cedf7ab783f55eb6</guid>
<link>https://academictorrents.com/details/9ca348a1e9b6a56f90e53ab1cedf7ab783f55eb6</link>
<description/>
<size>172939</size>
</item><item>
<title>Linear Programming Relaxations and Belief Propagation -- An Empirical Study (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>adc57fd2617de4ac0da9741eb465bab36f80d757</infohash>
<guid>https://academictorrents.com/details/adc57fd2617de4ac0da9741eb465bab36f80d757</guid>
<link>https://academictorrents.com/details/adc57fd2617de4ac0da9741eb465bab36f80d757</link>
<description/>
<size>802836</size>
</item><item>
<title>Stochastic Methods for l1-regularized Loss Minimization</title>
<category>Paper</category>
<infohash>4643cebc79b7e7f77ca33093c614cb171a4dac9c</infohash>
<guid>https://academictorrents.com/details/4643cebc79b7e7f77ca33093c614cb171a4dac9c</guid>
<link>https://academictorrents.com/details/4643cebc79b7e7f77ca33093c614cb171a4dac9c</link>
<description/>
<size>517561</size>
</item><item>
<title>MULAN: A Java Library for Multi-Label Learning</title>
<category>Paper</category>
<infohash>06e59f0717ac2e28a288ec03601d0de30f8bb28f</infohash>
<guid>https://academictorrents.com/details/06e59f0717ac2e28a288ec03601d0de30f8bb28f</guid>
<link>https://academictorrents.com/details/06e59f0717ac2e28a288ec03601d0de30f8bb28f</link>
<description/>
<size>29581</size>
</item><item>
<title>Information Rates of Nonparametric Gaussian Process Methods</title>
<category>Paper</category>
<infohash>73fdec12ac6011e192ec55f460dfba81a166d40e</infohash>
<guid>https://academictorrents.com/details/73fdec12ac6011e192ec55f460dfba81a166d40e</guid>
<link>https://academictorrents.com/details/73fdec12ac6011e192ec55f460dfba81a166d40e</link>
<description/>
<size>198200</size>
</item><item>
<title>Learning to Classify Ordinal Data: The Data Replication Method</title>
<category>Paper</category>
<infohash>8381325b62839851a03b66edfa8fc395a6c73abd</infohash>
<guid>https://academictorrents.com/details/8381325b62839851a03b66edfa8fc395a6c73abd</guid>
<link>https://academictorrents.com/details/8381325b62839851a03b66edfa8fc395a6c73abd</link>
<description/>
<size>448241</size>
</item><item>
<title>A Classification Framework for Anomaly Detection</title>
<category>Paper</category>
<infohash>95df4fceb716fe136e921db7821686a6f1665020</infohash>
<guid>https://academictorrents.com/details/95df4fceb716fe136e921db7821686a6f1665020</guid>
<link>https://academictorrents.com/details/95df4fceb716fe136e921db7821686a6f1665020</link>
<description/>
<size>181540</size>
</item><item>
<title>Unsupervised Supervised Learning II: Margin-Based Classification Without Labels</title>
<category>Paper</category>
<infohash>abfba3928f07f06af6d2f37f98c56731144689dd</infohash>
<guid>https://academictorrents.com/details/abfba3928f07f06af6d2f37f98c56731144689dd</guid>
<link>https://academictorrents.com/details/abfba3928f07f06af6d2f37f98c56731144689dd</link>
<description/>
<size>253973</size>
</item><item>
<title>Worst-Case Analysis of Selective Sampling for Linear Classification</title>
<category>Paper</category>
<infohash>6337ffb8648d6c0dd01b9d5463f86c66f7b1c96f</infohash>
<guid>https://academictorrents.com/details/6337ffb8648d6c0dd01b9d5463f86c66f7b1c96f</guid>
<link>https://academictorrents.com/details/6337ffb8648d6c0dd01b9d5463f86c66f7b1c96f</link>
<description/>
<size>184339</size>
</item><item>
<title>A Generalized Kernel Approach to Dissimilarity-based Classification (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>620871753fb3a80f0177dcc20a193648dbb8197d</infohash>
<guid>https://academictorrents.com/details/620871753fb3a80f0177dcc20a193648dbb8197d</guid>
<link>https://academictorrents.com/details/620871753fb3a80f0177dcc20a193648dbb8197d</link>
<description/>
<size>451295</size>
</item><item>
<title>Experiment Selection for Causal Discovery</title>
<category>Paper</category>
<infohash>695290318fd1fc5633f883f832bc089bdf840538</infohash>
<guid>https://academictorrents.com/details/695290318fd1fc5633f883f832bc089bdf840538</guid>
<link>https://academictorrents.com/details/695290318fd1fc5633f883f832bc089bdf840538</link>
<description/>
<size>1266907</size>
</item><item>
<title>Online Learning with Samples Drawn from Non-identical Distributions</title>
<category>Paper</category>
<infohash>bf4dff3c524cb89cad4cefa7d237abbf0278493e</infohash>
<guid>https://academictorrents.com/details/bf4dff3c524cb89cad4cefa7d237abbf0278493e</guid>
<link>https://academictorrents.com/details/bf4dff3c524cb89cad4cefa7d237abbf0278493e</link>
<description/>
<size>210126</size>
</item><item>
<title>Marginal Likelihood Integrals for Mixtures of Independence Models</title>
<category>Paper</category>
<infohash>60f075fed1a4fd3d039561039957fa80bdead00c</infohash>
<guid>https://academictorrents.com/details/60f075fed1a4fd3d039561039957fa80bdead00c</guid>
<link>https://academictorrents.com/details/60f075fed1a4fd3d039561039957fa80bdead00c</link>
<description/>
<size>177752</size>
</item><item>
<title>Improved Moves for Truncated Convex Models</title>
<category>Paper</category>
<infohash>9d1759049019d25a6c9cb965116e102a9692698b</infohash>
<guid>https://academictorrents.com/details/9d1759049019d25a6c9cb965116e102a9692698b</guid>
<link>https://academictorrents.com/details/9d1759049019d25a6c9cb965116e102a9692698b</link>
<description/>
<size>526599</size>
</item><item>
<title>Incremental Support Vector Learning: Analysis, Implementation and Applications (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>c10a3083a19620e20dd28eec55d39763695469c2</infohash>
<guid>https://academictorrents.com/details/c10a3083a19620e20dd28eec55d39763695469c2</guid>
<link>https://academictorrents.com/details/c10a3083a19620e20dd28eec55d39763695469c2</link>
<description/>
<size>259910</size>
</item><item>
<title>Learning with Decision Lists of Data-Dependent Features</title>
<category>Paper</category>
<infohash>bad76c32d58770622c34f7b34cfb61aaa72b7626</infohash>
<guid>https://academictorrents.com/details/bad76c32d58770622c34f7b34cfb61aaa72b7626</guid>
<link>https://academictorrents.com/details/bad76c32d58770622c34f7b34cfb61aaa72b7626</link>
<description/>
<size>196481</size>
</item><item>
<title>Kernel Regression in the Presence of Correlated Errors</title>
<category>Paper</category>
<infohash>5a2e5d42009b860ea5c3967332603f4f5aca49ea</infohash>
<guid>https://academictorrents.com/details/5a2e5d42009b860ea5c3967332603f4f5aca49ea</guid>
<link>https://academictorrents.com/details/5a2e5d42009b860ea5c3967332603f4f5aca49ea</link>
<description/>
<size>465077</size>
</item><item>
<title>Learning Latent Tree Graphical Models</title>
<category>Paper</category>
<infohash>230f679124442f5ce05668b6ebf8a8f9f27337df</infohash>
<guid>https://academictorrents.com/details/230f679124442f5ce05668b6ebf8a8f9f27337df</guid>
<link>https://academictorrents.com/details/230f679124442f5ce05668b6ebf8a8f9f27337df</link>
<description/>
<size>734638</size>
</item><item>
<title>Exploiting Best-Match Equations for Efficient Reinforcement Learning</title>
<category>Paper</category>
<infohash>70c84121a79df3e988df8f214c107352a7a04808</infohash>
<guid>https://academictorrents.com/details/70c84121a79df3e988df8f214c107352a7a04808</guid>
<link>https://academictorrents.com/details/70c84121a79df3e988df8f214c107352a7a04808</link>
<description/>
<size>495827</size>
</item><item>
<title>Parameter Screening and Optimisation for ILP using Designed Experiments</title>
<category>Paper</category>
<infohash>7ee4af34fd1bdd06a40e4a8d7d946f399c919444</infohash>
<guid>https://academictorrents.com/details/7ee4af34fd1bdd06a40e4a8d7d946f399c919444</guid>
<link>https://academictorrents.com/details/7ee4af34fd1bdd06a40e4a8d7d946f399c919444</link>
<description/>
<size>281421</size>
</item><item>
<title>Analysis of Variance of Cross-Validation Estimators of the Generalization Error</title>
<category>Paper</category>
<infohash>822c908e2655cccc42ffe1cdbdc3b79454294032</infohash>
<guid>https://academictorrents.com/details/822c908e2655cccc42ffe1cdbdc3b79454294032</guid>
<link>https://academictorrents.com/details/822c908e2655cccc42ffe1cdbdc3b79454294032</link>
<description/>
<size>294942</size>
</item><item>
<title>Ensemble Pruning Via Semi-definite Programming (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>dd46e5dd71f1d030fecbeaefe8f160db8a667c4c</infohash>
<guid>https://academictorrents.com/details/dd46e5dd71f1d030fecbeaefe8f160db8a667c4c</guid>
<link>https://academictorrents.com/details/dd46e5dd71f1d030fecbeaefe8f160db8a667c4c</link>
<description/>
<size>262394</size>
</item><item>
<title>Distance Dependent Chinese Restaurant Processes</title>
<category>Paper</category>
<infohash>e8ab6453351bea02ae0ba4ff3f6749bd4734a203</infohash>
<guid>https://academictorrents.com/details/e8ab6453351bea02ae0ba4ff3f6749bd4734a203</guid>
<link>https://academictorrents.com/details/e8ab6453351bea02ae0ba4ff3f6749bd4734a203</link>
<description/>
<size>1964718</size>
</item><item>
<title>Exploring Strategies for Training Deep Neural Networks</title>
<category>Paper</category>
<infohash>d712b334f78b0dc31820f9faaf9f3c59c70799b9</infohash>
<guid>https://academictorrents.com/details/d712b334f78b0dc31820f9faaf9f3c59c70799b9</guid>
<link>https://academictorrents.com/details/d712b334f78b0dc31820f9faaf9f3c59c70799b9</link>
<description/>
<size>973024</size>
</item><item>
<title>Approximate Marginals in Latent Gaussian Models</title>
<category>Paper</category>
<infohash>46b85abadd8047e372ad9d828267c3caf2010015</infohash>
<guid>https://academictorrents.com/details/46b85abadd8047e372ad9d828267c3caf2010015</guid>
<link>https://academictorrents.com/details/46b85abadd8047e372ad9d828267c3caf2010015</link>
<description/>
<size>3570298</size>
</item><item>
<title>Uniform Object Generation for Optimizing One-class Classifiers (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>a789d64967f290658051009c71ac12ce521f53f8</infohash>
<guid>https://academictorrents.com/details/a789d64967f290658051009c71ac12ce521f53f8</guid>
<link>https://academictorrents.com/details/a789d64967f290658051009c71ac12ce521f53f8</link>
<description/>
<size>188011</size>
</item><item>
<title>Efficient Computation of Gapped Substring Kernels on Large Alphabets</title>
<category>Paper</category>
<infohash>dff9129acc8eb68901209bc0e4403440780f2f52</infohash>
<guid>https://academictorrents.com/details/dff9129acc8eb68901209bc0e4403440780f2f52</guid>
<link>https://academictorrents.com/details/dff9129acc8eb68901209bc0e4403440780f2f52</link>
<description/>
<size>254613</size>
</item><item>
<title>Adaptive Prototype Learning Algorithms: Theoretical and Experimental Studies</title>
<category>Paper</category>
<infohash>044f66e5f830e8af633c12126e17b83e7ca11159</infohash>
<guid>https://academictorrents.com/details/044f66e5f830e8af633c12126e17b83e7ca11159</guid>
<link>https://academictorrents.com/details/044f66e5f830e8af633c12126e17b83e7ca11159</link>
<description/>
<size>389203</size>
</item><item>
<title>Non-Parametric Estimation of Topic Hierarchies from Texts with Hierarchical Dirichlet Processes</title>
<category>Paper</category>
<infohash>f0d7c8957eb8166c781afcb9d4cb800f88dc8d1c</infohash>
<guid>https://academictorrents.com/details/f0d7c8957eb8166c781afcb9d4cb800f88dc8d1c</guid>
<link>https://academictorrents.com/details/f0d7c8957eb8166c781afcb9d4cb800f88dc8d1c</link>
<description/>
<size>524924</size>
</item><item>
<title>A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization</title>
<category>Paper</category>
<infohash>30cff6756b819112cf94ba44f6f99d527f5a6bb7</infohash>
<guid>https://academictorrents.com/details/30cff6756b819112cf94ba44f6f99d527f5a6bb7</guid>
<link>https://academictorrents.com/details/30cff6756b819112cf94ba44f6f99d527f5a6bb7</link>
<description/>
<size>242084</size>
</item><item>
<title>Inner Product Spaces for Bayesian Networks</title>
<category>Paper</category>
<infohash>b328509aa436f2928fd95411b75892fa76c154b7</infohash>
<guid>https://academictorrents.com/details/b328509aa436f2928fd95411b75892fa76c154b7</guid>
<link>https://academictorrents.com/details/b328509aa436f2928fd95411b75892fa76c154b7</link>
<description/>
<size>164607</size>
</item><item>
<title>Nearest Neighbor Clustering: A Baseline Method for Consistent Clustering with Arbitrary Objective Functions</title>
<category>Paper</category>
<infohash>500f721722fc56105b977d78a9dbda71e6b95480</infohash>
<guid>https://academictorrents.com/details/500f721722fc56105b977d78a9dbda71e6b95480</guid>
<link>https://academictorrents.com/details/500f721722fc56105b977d78a9dbda71e6b95480</link>
<description/>
<size>303531</size>
</item><item>
<title>Information, Divergence and Risk for Binary Experiments</title>
<category>Paper</category>
<infohash>d905fed7d452becae1f8d4500845f53475ceb50e</infohash>
<guid>https://academictorrents.com/details/d905fed7d452becae1f8d4500845f53475ceb50e</guid>
<link>https://academictorrents.com/details/d905fed7d452becae1f8d4500845f53475ceb50e</link>
<description/>
<size>1505604</size>
</item><item>
<title>Natural Language Processing (Almost) from Scratch</title>
<category>Paper</category>
<infohash>824fd119b03225610249c0ce6ceae778dcb7e28d</infohash>
<guid>https://academictorrents.com/details/824fd119b03225610249c0ce6ceae778dcb7e28d</guid>
<link>https://academictorrents.com/details/824fd119b03225610249c0ce6ceae778dcb7e28d</link>
<description/>
<size>424780</size>
</item><item>
<title>Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection</title>
<category>Paper</category>
<infohash>45c959bd128b0d7c8f8bb56a142feace3ded16aa</infohash>
<guid>https://academictorrents.com/details/45c959bd128b0d7c8f8bb56a142feace3ded16aa</guid>
<link>https://academictorrents.com/details/45c959bd128b0d7c8f8bb56a142feace3ded16aa</link>
<description/>
<size>243076</size>
</item><item>
<title>Learning Permutations with Exponential Weights</title>
<category>Paper</category>
<infohash>371e6a8a2f445e8017eac1f9c94cdc7b60be0524</infohash>
<guid>https://academictorrents.com/details/371e6a8a2f445e8017eac1f9c94cdc7b60be0524</guid>
<link>https://academictorrents.com/details/371e6a8a2f445e8017eac1f9c94cdc7b60be0524</link>
<description/>
<size>246967</size>
</item><item>
<title>Learning a Robust Relevance Model for Search Using Kernel Methods</title>
<category>Paper</category>
<infohash>63c867e896288648b97055456a9bad2c6527c35d</infohash>
<guid>https://academictorrents.com/details/63c867e896288648b97055456a9bad2c6527c35d</guid>
<link>https://academictorrents.com/details/63c867e896288648b97055456a9bad2c6527c35d</link>
<description/>
<size>256735</size>
</item><item>
<title>The Indian Buffet Process: An Introduction and Review</title>
<category>Paper</category>
<infohash>4590e35ca648f463521828322f1d12393677505e</infohash>
<guid>https://academictorrents.com/details/4590e35ca648f463521828322f1d12393677505e</guid>
<link>https://academictorrents.com/details/4590e35ca648f463521828322f1d12393677505e</link>
<description/>
<size>407748</size>
</item><item>
<title>Adaptive Online Prediction by Following the Perturbed Leader</title>
<category>Paper</category>
<infohash>18f0cf1d83c83b51db477e84fd770d5ac05dbee6</infohash>
<guid>https://academictorrents.com/details/18f0cf1d83c83b51db477e84fd770d5ac05dbee6</guid>
<link>https://academictorrents.com/details/18f0cf1d83c83b51db477e84fd770d5ac05dbee6</link>
<description/>
<size>191610</size>
</item><item>
<title>Collaborative Multiagent Reinforcement Learning by Payoff Propagation</title>
<category>Paper</category>
<infohash>592208652eb5156681acc26f10c6521526830884</infohash>
<guid>https://academictorrents.com/details/592208652eb5156681acc26f10c6521526830884</guid>
<link>https://academictorrents.com/details/592208652eb5156681acc26f10c6521526830884</link>
<description/>
<size>521268</size>
</item><item>
<title>The Learning-Curve Sampling Method Applied to Model-Based Clustering</title>
<category>Paper</category>
<infohash>4438c8dba4c0e08576a6cbe1db3ef9ab04c9f922</infohash>
<guid>https://academictorrents.com/details/4438c8dba4c0e08576a6cbe1db3ef9ab04c9f922</guid>
<link>https://academictorrents.com/details/4438c8dba4c0e08576a6cbe1db3ef9ab04c9f922</link>
<description/>
<size>193580</size>
</item><item>
<title>Graph-Based Hierarchical Conceptual Clustering</title>
<category>Paper</category>
<infohash>a874a2fe203f2cce8fb34e3a2cc16b7f6b25fd32</infohash>
<guid>https://academictorrents.com/details/a874a2fe203f2cce8fb34e3a2cc16b7f6b25fd32</guid>
<link>https://academictorrents.com/details/a874a2fe203f2cce8fb34e3a2cc16b7f6b25fd32</link>
<description/>
<size>207105</size>
</item><item>
<title>Multi-Task Learning for Classification with Dirichlet Process Priors</title>
<category>Paper</category>
<infohash>38cf8e6441722f7ca42d2c2c28fe8248e2f55472</infohash>
<guid>https://academictorrents.com/details/38cf8e6441722f7ca42d2c2c28fe8248e2f55472</guid>
<link>https://academictorrents.com/details/38cf8e6441722f7ca42d2c2c28fe8248e2f55472</link>
<description/>
<size>250967</size>
</item><item>
<title>Efficient Learning with Partially Observed Attributes</title>
<category>Paper</category>
<infohash>04722103ad77ab5639f019773a828c82183f8f60</infohash>
<guid>https://academictorrents.com/details/04722103ad77ab5639f019773a828c82183f8f60</guid>
<link>https://academictorrents.com/details/04722103ad77ab5639f019773a828c82183f8f60</link>
<description/>
<size>187318</size>
</item><item>
<title>Multiple Kernel Learning Algorithms</title>
<category>Paper</category>
<infohash>83f0dc25b1c9175df96fbe168bd153226b4fa473</infohash>
<guid>https://academictorrents.com/details/83f0dc25b1c9175df96fbe168bd153226b4fa473</guid>
<link>https://academictorrents.com/details/83f0dc25b1c9175df96fbe168bd153226b4fa473</link>
<description/>
<size>395805</size>
</item><item>
<title>On the Representer Theorem and Equivalent Degrees of Freedom of SVR</title>
<category>Paper</category>
<infohash>a69d1eef1b59b1fe305e6951d80a6ef27608ef25</infohash>
<guid>https://academictorrents.com/details/a69d1eef1b59b1fe305e6951d80a6ef27608ef25</guid>
<link>https://academictorrents.com/details/a69d1eef1b59b1fe305e6951d80a6ef27608ef25</link>
<description/>
<size>463086</size>
</item><item>
<title>A Modified Finite Newton Method for Fast Solution of Large Scale Linear SVMs</title>
<category>Paper</category>
<infohash>965bb1f04693a15ad715ed355c6d910bbb81a4dc</infohash>
<guid>https://academictorrents.com/details/965bb1f04693a15ad715ed355c6d910bbb81a4dc</guid>
<link>https://academictorrents.com/details/965bb1f04693a15ad715ed355c6d910bbb81a4dc</link>
<description/>
<size>157284</size>
</item><item>
<title>Efficient Learning of Label Ranking by Soft Projections onto Polyhedra (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>edbb2295a1ce9b8d0a55255c4ea01965b0be4e99</infohash>
<guid>https://academictorrents.com/details/edbb2295a1ce9b8d0a55255c4ea01965b0be4e99</guid>
<link>https://academictorrents.com/details/edbb2295a1ce9b8d0a55255c4ea01965b0be4e99</link>
<description/>
<size>572889</size>
</item><item>
<title>Variational Inference in Nonconjugate Models</title>
<category>Paper</category>
<infohash>de0d3b62fd9c1efd79ddf106dfe1036d602fa012</infohash>
<guid>https://academictorrents.com/details/de0d3b62fd9c1efd79ddf106dfe1036d602fa012</guid>
<link>https://academictorrents.com/details/de0d3b62fd9c1efd79ddf106dfe1036d602fa012</link>
<description/>
<size>500052</size>
</item><item>
<title>Generalization Bounds for the Area Under the ROC Curve</title>
<category>Paper</category>
<infohash>325a17176cefb598e7776562e69ef3dfc1fd2480</infohash>
<guid>https://academictorrents.com/details/325a17176cefb598e7776562e69ef3dfc1fd2480</guid>
<link>https://academictorrents.com/details/325a17176cefb598e7776562e69ef3dfc1fd2480</link>
<description/>
<size>259604</size>
</item><item>
<title>Efficient Structure Learning of Bayesian Networks using Constraints</title>
<category>Paper</category>
<infohash>b5d0a272f00e853c185784d22b3cb5f4c604b153</infohash>
<guid>https://academictorrents.com/details/b5d0a272f00e853c185784d22b3cb5f4c604b153</guid>
<link>https://academictorrents.com/details/b5d0a272f00e853c185784d22b3cb5f4c604b153</link>
<description/>
<size>215951</size>
</item><item>
<title>Learning from Partial Labels</title>
<category>Paper</category>
<infohash>09cf3400cedcc98b277bb073780faa79a0b80474</infohash>
<guid>https://academictorrents.com/details/09cf3400cedcc98b277bb073780faa79a0b80474</guid>
<link>https://academictorrents.com/details/09cf3400cedcc98b277bb073780faa79a0b80474</link>
<description/>
<size>1913469</size>
</item><item>
<title>Domain Decomposition Approach for Fast Gaussian Process Regression of Large Spatial Data Sets</title>
<category>Paper</category>
<infohash>59b1b88dedf188188da1681cc913c18ca9c97bec</infohash>
<guid>https://academictorrents.com/details/59b1b88dedf188188da1681cc913c18ca9c97bec</guid>
<link>https://academictorrents.com/details/59b1b88dedf188188da1681cc913c18ca9c97bec</link>
<description/>
<size>427930</size>
</item><item>
<title>Large Margin Semi-supervised Learning</title>
<category>Paper</category>
<infohash>d9e261b11c763abf68d852e6b6b824a7cca7b6de</infohash>
<guid>https://academictorrents.com/details/d9e261b11c763abf68d852e6b6b824a7cca7b6de</guid>
<link>https://academictorrents.com/details/d9e261b11c763abf68d852e6b6b824a7cca7b6de</link>
<description/>
<size>260985</size>
</item><item>
<title>Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm</title>
<category>Paper</category>
<infohash>9938d5a4bcf4e7986b1059cdb293a5beb65d6233</infohash>
<guid>https://academictorrents.com/details/9938d5a4bcf4e7986b1059cdb293a5beb65d6233</guid>
<link>https://academictorrents.com/details/9938d5a4bcf4e7986b1059cdb293a5beb65d6233</link>
<description/>
<size>199047</size>
</item><item>
<title>Structured Prediction, Dual Extragradient and Bregman Projections (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>161c197fd72aeb5eb6f06162174759bafacb6463</infohash>
<guid>https://academictorrents.com/details/161c197fd72aeb5eb6f06162174759bafacb6463</guid>
<link>https://academictorrents.com/details/161c197fd72aeb5eb6f06162174759bafacb6463</link>
<description/>
<size>534290</size>
</item><item>
<title>Learning Transformation Models for Ranking and Survival Analysis</title>
<category>Paper</category>
<infohash>d426f5586e5b5b79155a093659a49e45df66d736</infohash>
<guid>https://academictorrents.com/details/d426f5586e5b5b79155a093659a49e45df66d736</guid>
<link>https://academictorrents.com/details/d426f5586e5b5b79155a093659a49e45df66d736</link>
<description/>
<size>806046</size>
</item><item>
<title>Local Discriminant Wavelet Packet Coordinates for Face Recognition</title>
<category>Paper</category>
<infohash>1ebde5e7722bd9e993343df65704cf777d08f173</infohash>
<guid>https://academictorrents.com/details/1ebde5e7722bd9e993343df65704cf777d08f173</guid>
<link>https://academictorrents.com/details/1ebde5e7722bd9e993343df65704cf777d08f173</link>
<description/>
<size>371959</size>
</item><item>
<title>Stability Properties of Empirical Risk Minimization over Donsker Classes</title>
<category>Paper</category>
<infohash>164dfc2054da248f85722945ffb0777ddc89eddc</infohash>
<guid>https://academictorrents.com/details/164dfc2054da248f85722945ffb0777ddc89eddc</guid>
<link>https://academictorrents.com/details/164dfc2054da248f85722945ffb0777ddc89eddc</link>
<description/>
<size>164348</size>
</item><item>
<title>Kernel-Based Learning of Hierarchical Multilabel Classification Models (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>94712fb735291e85dc0ba56754845a08d5fff917</infohash>
<guid>https://academictorrents.com/details/94712fb735291e85dc0ba56754845a08d5fff917</guid>
<link>https://academictorrents.com/details/94712fb735291e85dc0ba56754845a08d5fff917</link>
<description/>
<size>193087</size>
</item><item>
<title>Introduction to Special Issue on Machine Learning Approaches to Shallow Parsing</title>
<category>Paper</category>
<infohash>62e4b0e2c43b84917900a16ba787a0bfe39d1101</infohash>
<guid>https://academictorrents.com/details/62e4b0e2c43b84917900a16ba787a0bfe39d1101</guid>
<link>https://academictorrents.com/details/62e4b0e2c43b84917900a16ba787a0bfe39d1101</link>
<description/>
<size>392170</size>
</item><item>
<title>Inductive Synthesis of Functional Programs: An Explanation Based Generalization Approach (Special Topic on Inductive Programming)</title>
<category>Paper</category>
<infohash>9892859c9259965676386476680f11f1f59976de</infohash>
<guid>https://academictorrents.com/details/9892859c9259965676386476680f11f1f59976de</guid>
<link>https://academictorrents.com/details/9892859c9259965676386476680f11f1f59976de</link>
<description/>
<size>258888</size>
</item><item>
<title>Laplacian Support Vector Machines Trained in the Primal</title>
<category>Paper</category>
<infohash>ca13f9759a4a9c77cb3eac5d644bd6e63976369f</infohash>
<guid>https://academictorrents.com/details/ca13f9759a4a9c77cb3eac5d644bd6e63976369f</guid>
<link>https://academictorrents.com/details/ca13f9759a4a9c77cb3eac5d644bd6e63976369f</link>
<description/>
<size>502275</size>
</item><item>
<title>Polynomial Identification in the Limit of Substitutable Context-free Languages</title>
<category>Paper</category>
<infohash>8582796ce77f239d88f9d65e026c7bf64f7ce642</infohash>
<guid>https://academictorrents.com/details/8582796ce77f239d88f9d65e026c7bf64f7ce642</guid>
<link>https://academictorrents.com/details/8582796ce77f239d88f9d65e026c7bf64f7ce642</link>
<description/>
<size>150260</size>
</item><item>
<title>Locally Defined Principal Curves and Surfaces</title>
<category>Paper</category>
<infohash>ca245e9710f3d0d8205c714dc2380642e0f1c879</infohash>
<guid>https://academictorrents.com/details/ca245e9710f3d0d8205c714dc2380642e0f1c879</guid>
<link>https://academictorrents.com/details/ca245e9710f3d0d8205c714dc2380642e0f1c879</link>
<description/>
<size>2874882</size>
</item><item>
<title>Computationally Efficient Convolved Multiple Output Gaussian Processes</title>
<category>Paper</category>
<infohash>8229e8964303fb5bb9e62ed070b79d1e2198a945</infohash>
<guid>https://academictorrents.com/details/8229e8964303fb5bb9e62ed070b79d1e2198a945</guid>
<link>https://academictorrents.com/details/8229e8964303fb5bb9e62ed070b79d1e2198a945</link>
<description/>
<size>646802</size>
</item><item>
<title>Producing Power-Law Distributions and Damping Word Frequencies with Two-Stage Language Models</title>
<category>Paper</category>
<infohash>f5c1883a0bbd33e97e7faa573d2acaf2eb34df6e</infohash>
<guid>https://academictorrents.com/details/f5c1883a0bbd33e97e7faa573d2acaf2eb34df6e</guid>
<link>https://academictorrents.com/details/f5c1883a0bbd33e97e7faa573d2acaf2eb34df6e</link>
<description/>
<size>560663</size>
</item><item>
<title>MinReg: A Scalable Algorithm for Learning Parsimonious Regulatory Networks in Yeast and Mammals</title>
<category>Paper</category>
<infohash>bb33c60ea59f55679d51c8070c51206265b93b09</infohash>
<guid>https://academictorrents.com/details/bb33c60ea59f55679d51c8070c51206265b93b09</guid>
<link>https://academictorrents.com/details/bb33c60ea59f55679d51c8070c51206265b93b09</link>
<description/>
<size>395928</size>
</item><item>
<title>Generalized Bradley-Terry Models and Multi-Class Probability Estimates</title>
<category>Paper</category>
<infohash>19815316b7b4aa16f6abf2bc563a05b995903c22</infohash>
<guid>https://academictorrents.com/details/19815316b7b4aa16f6abf2bc563a05b995903c22</guid>
<link>https://academictorrents.com/details/19815316b7b4aa16f6abf2bc563a05b995903c22</link>
<description/>
<size>402410</size>
</item><item>
<title>Integrating Naïve Bayes and FOIL</title>
<category>Paper</category>
<infohash>8f7ffe6997860f719aed271668c1e1fdc0888647</infohash>
<guid>https://academictorrents.com/details/8f7ffe6997860f719aed271668c1e1fdc0888647</guid>
<link>https://academictorrents.com/details/8f7ffe6997860f719aed271668c1e1fdc0888647</link>
<description/>
<size>223183</size>
</item><item>
<title>Truncating the Loop Series Expansion for Belief Propagation</title>
<category>Paper</category>
<infohash>8a7a6032b5ba4accb6e64faf9a6d368df80654db</infohash>
<guid>https://academictorrents.com/details/8a7a6032b5ba4accb6e64faf9a6d368df80654db</guid>
<link>https://academictorrents.com/details/8a7a6032b5ba4accb6e64faf9a6d368df80654db</link>
<description/>
<size>1310581</size>
</item><item>
<title>A Hierarchy of Support Vector Machines for Pattern Detection</title>
<category>Paper</category>
<infohash>fdc712a3fbac1b24a3abbe391f720fcf68409a7b</infohash>
<guid>https://academictorrents.com/details/fdc712a3fbac1b24a3abbe391f720fcf68409a7b</guid>
<link>https://academictorrents.com/details/fdc712a3fbac1b24a3abbe391f720fcf68409a7b</link>
<description/>
<size>1653637</size>
</item><item>
<title>LPmade: Link Prediction Made Easy</title>
<category>Paper</category>
<infohash>8b1578efd4408a6f87ee11704b6b0077763cab40</infohash>
<guid>https://academictorrents.com/details/8b1578efd4408a6f87ee11704b6b0077763cab40</guid>
<link>https://academictorrents.com/details/8b1578efd4408a6f87ee11704b6b0077763cab40</link>
<description/>
<size>49331</size>
</item><item>
<title>Convex and Network Flow Optimization for Structured Sparsity</title>
<category>Paper</category>
<infohash>3c3a188d4c891f1dc8aa40ef2140e88b63efe684</infohash>
<guid>https://academictorrents.com/details/3c3a188d4c891f1dc8aa40ef2140e88b63efe684</guid>
<link>https://academictorrents.com/details/3c3a188d4c891f1dc8aa40ef2140e88b63efe684</link>
<description/>
<size>763932</size>
</item><item>
<title>Semi-Supervised Learning with Measure Propagation</title>
<category>Paper</category>
<infohash>7400e484c2bdc03641f6e277f868abee8f37f56f</infohash>
<guid>https://academictorrents.com/details/7400e484c2bdc03641f6e277f868abee8f37f56f</guid>
<link>https://academictorrents.com/details/7400e484c2bdc03641f6e277f868abee8f37f56f</link>
<description/>
<size>971519</size>
</item><item>
<title>Learning Parts-Based Representations of Data</title>
<category>Paper</category>
<infohash>0357c6b615807429d48a6e0d661a77fbef1ad122</infohash>
<guid>https://academictorrents.com/details/0357c6b615807429d48a6e0d661a77fbef1ad122</guid>
<link>https://academictorrents.com/details/0357c6b615807429d48a6e0d661a77fbef1ad122</link>
<description/>
<size>440259</size>
</item><item>
<title>Bounded Kernel-Based Online Learning</title>
<category>Paper</category>
<infohash>2aa2212325162e2d22c8fbb2790fb1b5d30fe275</infohash>
<guid>https://academictorrents.com/details/2aa2212325162e2d22c8fbb2790fb1b5d30fe275</guid>
<link>https://academictorrents.com/details/2aa2212325162e2d22c8fbb2790fb1b5d30fe275</link>
<description/>
<size>429447</size>
</item><item>
<title>Training SVMs Without Offset</title>
<category>Paper</category>
<infohash>ae48285c025e30986bbe68e197863f4857690b5b</infohash>
<guid>https://academictorrents.com/details/ae48285c025e30986bbe68e197863f4857690b5b</guid>
<link>https://academictorrents.com/details/ae48285c025e30986bbe68e197863f4857690b5b</link>
<description/>
<size>922835</size>
</item><item>
<title>MSVMpack: A Multi-Class Support Vector Machine Package</title>
<category>Paper</category>
<infohash>33ae95873171c2644aebd2858ef7da5c792c9262</infohash>
<guid>https://academictorrents.com/details/33ae95873171c2644aebd2858ef7da5c792c9262</guid>
<link>https://academictorrents.com/details/33ae95873171c2644aebd2858ef7da5c792c9262</link>
<description/>
<size>46009</size>
</item><item>
<title>Causal Graph Based Decomposition of Factored MDPs</title>
<category>Paper</category>
<infohash>386834e90cfa8d286a02e1aea2009b56f6a01ff0</infohash>
<guid>https://academictorrents.com/details/386834e90cfa8d286a02e1aea2009b56f6a01ff0</guid>
<link>https://academictorrents.com/details/386834e90cfa8d286a02e1aea2009b56f6a01ff0</link>
<description/>
<size>305908</size>
</item><item>
<title>Large Margin Hierarchical Classification with Mutually Exclusive Class Membership</title>
<category>Paper</category>
<infohash>c81e1a1ae19abcf4bbb3a51fe6134778fd0d1985</infohash>
<guid>https://academictorrents.com/details/c81e1a1ae19abcf4bbb3a51fe6134778fd0d1985</guid>
<link>https://academictorrents.com/details/c81e1a1ae19abcf4bbb3a51fe6134778fd0d1985</link>
<description/>
<size>269391</size>
</item><item>
<title>Hierarchical Average Reward Reinforcement Learning</title>
<category>Paper</category>
<infohash>a15142ab25e34c2e5b3af80186f528305070880a</infohash>
<guid>https://academictorrents.com/details/a15142ab25e34c2e5b3af80186f528305070880a</guid>
<link>https://academictorrents.com/details/a15142ab25e34c2e5b3af80186f528305070880a</link>
<description/>
<size>332685</size>
</item><item>
<title>Building Blocks for Variational Bayesian Learning of Latent Variable Models</title>
<category>Paper</category>
<infohash>b1637b1db97ed934c05b61b6aed29743647cb76c</infohash>
<guid>https://academictorrents.com/details/b1637b1db97ed934c05b61b6aed29743647cb76c</guid>
<link>https://academictorrents.com/details/b1637b1db97ed934c05b61b6aed29743647cb76c</link>
<description/>
<size>426827</size>
</item><item>
<title>Inverse Reinforcement Learning in Partially Observable Environments</title>
<category>Paper</category>
<infohash>3b46631410d926a0424de8036eadedd16773e1a5</infohash>
<guid>https://academictorrents.com/details/3b46631410d926a0424de8036eadedd16773e1a5</guid>
<link>https://academictorrents.com/details/3b46631410d926a0424de8036eadedd16773e1a5</link>
<description/>
<size>626031</size>
</item><item>
<title>"Ideal Parent" Structure Learning for Continuous Variable Bayesian Networks</title>
<category>Paper</category>
<infohash>2ad4d685df8fdcd5f6b735fd3a0e8295f0501b47</infohash>
<guid>https://academictorrents.com/details/2ad4d685df8fdcd5f6b735fd3a0e8295f0501b47</guid>
<link>https://academictorrents.com/details/2ad4d685df8fdcd5f6b735fd3a0e8295f0501b47</link>
<description/>
<size>663195</size>
</item><item>
<title>Building Support Vector Machines with Reduced Classifier Complexity (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>f91fa40ea6d17f2fa2b020ff21838ae5a8fef79b</infohash>
<guid>https://academictorrents.com/details/f91fa40ea6d17f2fa2b020ff21838ae5a8fef79b</guid>
<link>https://academictorrents.com/details/f91fa40ea6d17f2fa2b020ff21838ae5a8fef79b</link>
<description/>
<size>244174</size>
</item><item>
<title>QP Algorithms with Guaranteed Accuracy and Run Time for Support Vector Machines</title>
<category>Paper</category>
<infohash>2aa615cad7ebba3dab42edf5f6a62fb805fcf28d</infohash>
<guid>https://academictorrents.com/details/2aa615cad7ebba3dab42edf5f6a62fb805fcf28d</guid>
<link>https://academictorrents.com/details/2aa615cad7ebba3dab42edf5f6a62fb805fcf28d</link>
<description/>
<size>322124</size>
</item><item>
<title>Generalized TD Learning</title>
<category>Paper</category>
<infohash>7f095755a42c38cd1dfa2f08bcd8a6371249cc5e</infohash>
<guid>https://academictorrents.com/details/7f095755a42c38cd1dfa2f08bcd8a6371249cc5e</guid>
<link>https://academictorrents.com/details/7f095755a42c38cd1dfa2f08bcd8a6371249cc5e</link>
<description/>
<size>400392</size>
</item><item>
<title>Walk-Sums and Belief Propagation in Gaussian Graphical Models</title>
<category>Paper</category>
<infohash>0f7ad02c15111c41922fc6f445fda383f760ce09</infohash>
<guid>https://academictorrents.com/details/0f7ad02c15111c41922fc6f445fda383f760ce09</guid>
<link>https://academictorrents.com/details/0f7ad02c15111c41922fc6f445fda383f760ce09</link>
<description/>
<size>345369</size>
</item><item>
<title>Asymptotics in Empirical Risk Minimization</title>
<category>Paper</category>
<infohash>b8cfa79adb8e35287c9ace14cecceb964cb808b0</infohash>
<guid>https://academictorrents.com/details/b8cfa79adb8e35287c9ace14cecceb964cb808b0</guid>
<link>https://academictorrents.com/details/b8cfa79adb8e35287c9ace14cecceb964cb808b0</link>
<description/>
<size>150529</size>
</item><item>
<title>Neyman-Pearson Classification, Convexity and Stochastic Constraints</title>
<category>Paper</category>
<infohash>c155687e5522ae11e5ec2d018a96e0c27a8d9acc</infohash>
<guid>https://academictorrents.com/details/c155687e5522ae11e5ec2d018a96e0c27a8d9acc</guid>
<link>https://academictorrents.com/details/c155687e5522ae11e5ec2d018a96e0c27a8d9acc</link>
<description/>
<size>187159</size>
</item><item>
<title>In Search of Non-Gaussian Components of a High-Dimensional Distribution</title>
<category>Paper</category>
<infohash>10a165ce9747da34cb29632b9a5cce7912fd8ddc</infohash>
<guid>https://academictorrents.com/details/10a165ce9747da34cb29632b9a5cce7912fd8ddc</guid>
<link>https://academictorrents.com/details/10a165ce9747da34cb29632b9a5cce7912fd8ddc</link>
<description/>
<size>1276281</size>
</item><item>
<title>Concave Learners for Rankboost</title>
<category>Paper</category>
<infohash>c27abdf703fdc5f4ae00779f3cd94aaa1a9fabf9</infohash>
<guid>https://academictorrents.com/details/c27abdf703fdc5f4ae00779f3cd94aaa1a9fabf9</guid>
<link>https://academictorrents.com/details/c27abdf703fdc5f4ae00779f3cd94aaa1a9fabf9</link>
<description/>
<size>170755</size>
</item><item>
<title>Unsupervised Similarity-Based Risk Stratification for Cardiovascular Events Using Long-Term Time-Series Data</title>
<category>Paper</category>
<infohash>645268cc6d67bed015329f662a1fe0728cb50fcb</infohash>
<guid>https://academictorrents.com/details/645268cc6d67bed015329f662a1fe0728cb50fcb</guid>
<link>https://academictorrents.com/details/645268cc6d67bed015329f662a1fe0728cb50fcb</link>
<description/>
<size>276065</size>
</item><item>
<title>A Refined Margin Analysis for Boosting Algorithms via Equilibrium Margin</title>
<category>Paper</category>
<infohash>9df16aa64cc9fd65e8b97b8c49824c021dfd18fd</infohash>
<guid>https://academictorrents.com/details/9df16aa64cc9fd65e8b97b8c49824c021dfd18fd</guid>
<link>https://academictorrents.com/details/9df16aa64cc9fd65e8b97b8c49824c021dfd18fd</link>
<description/>
<size>244701</size>
</item><item>
<title>Proximal Methods for Hierarchical Sparse Coding</title>
<category>Paper</category>
<infohash>d0d17069650e8d58979b4a2230c9c4ceab89323d</infohash>
<guid>https://academictorrents.com/details/d0d17069650e8d58979b4a2230c9c4ceab89323d</guid>
<link>https://academictorrents.com/details/d0d17069650e8d58979b4a2230c9c4ceab89323d</link>
<description/>
<size>611345</size>
</item><item>
<title>The arules R-Package Ecosystem: Analyzing Interesting Patterns from Large Transaction Data Sets</title>
<category>Paper</category>
<infohash>7e5c3ac40b1b0bea28dcaa6cb5e9c05e5b3a68ef</infohash>
<guid>https://academictorrents.com/details/7e5c3ac40b1b0bea28dcaa6cb5e9c05e5b3a68ef</guid>
<link>https://academictorrents.com/details/7e5c3ac40b1b0bea28dcaa6cb5e9c05e5b3a68ef</link>
<description/>
<size>165055</size>
</item><item>
<title>Logistic Stick-Breaking Process</title>
<category>Paper</category>
<infohash>1b0d420bb3d183804b33d20aca0df7dd77c745c8</infohash>
<guid>https://academictorrents.com/details/1b0d420bb3d183804b33d20aca0df7dd77c745c8</guid>
<link>https://academictorrents.com/details/1b0d420bb3d183804b33d20aca0df7dd77c745c8</link>
<description/>
<size>2248158</size>
</item><item>
<title>Fast SDP Relaxations of Graph Cut Clustering, Transduction, and Other Combinatorial Problems (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>1fae6dbfbf47b107eb5aacd990d57c63431ad610</infohash>
<guid>https://academictorrents.com/details/1fae6dbfbf47b107eb5aacd990d57c63431ad610</guid>
<link>https://academictorrents.com/details/1fae6dbfbf47b107eb5aacd990d57c63431ad610</link>
<description/>
<size>1037500</size>
</item><item>
<title>Maximum-Gain Working Set Selection for SVMs (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>e0bd2a8edfc23f87a1076cfe42967ac99236947a</infohash>
<guid>https://academictorrents.com/details/e0bd2a8edfc23f87a1076cfe42967ac99236947a</guid>
<link>https://academictorrents.com/details/e0bd2a8edfc23f87a1076cfe42967ac99236947a</link>
<description/>
<size>252094</size>
</item><item>
<title>Online Learning of Multiple Tasks with a Shared Loss</title>
<category>Paper</category>
<infohash>ac5ee290188aea1123fc1316b8103da17d8c253a</infohash>
<guid>https://academictorrents.com/details/ac5ee290188aea1123fc1316b8103da17d8c253a</guid>
<link>https://academictorrents.com/details/ac5ee290188aea1123fc1316b8103da17d8c253a</link>
<description/>
<size>259615</size>
</item><item>
<title>Noisy-OR Component Analysis and its Application to Link Analysis</title>
<category>Paper</category>
<infohash>d07347babdc3166470e0d5a755d7533a5cf8adb0</infohash>
<guid>https://academictorrents.com/details/d07347babdc3166470e0d5a755d7533a5cf8adb0</guid>
<link>https://academictorrents.com/details/d07347babdc3166470e0d5a755d7533a5cf8adb0</link>
<description/>
<size>529546</size>
</item><item>
<title>Learning High-Dimensional Markov Forest Distributions: Analysis of Error Rates</title>
<category>Paper</category>
<infohash>2323614427d18622693c0bc286ed28341735f35e</infohash>
<guid>https://academictorrents.com/details/2323614427d18622693c0bc286ed28341735f35e</guid>
<link>https://academictorrents.com/details/2323614427d18622693c0bc286ed28341735f35e</link>
<description/>
<size>331367</size>
</item><item>
<title>Segmental Hidden Markov Models with Random Effects for Waveform Modeling</title>
<category>Paper</category>
<infohash>590e4885c3053387cb96a1322d74adead690b025</infohash>
<guid>https://academictorrents.com/details/590e4885c3053387cb96a1322d74adead690b025</guid>
<link>https://academictorrents.com/details/590e4885c3053387cb96a1322d74adead690b025</link>
<description/>
<size>529592</size>
</item><item>
<title>Point-Based Value Iteration for Continuous POMDPs</title>
<category>Paper</category>
<infohash>87ef9ab0ffb6ce89a2f2b4893fce0e822aea1ec3</infohash>
<guid>https://academictorrents.com/details/87ef9ab0ffb6ce89a2f2b4893fce0e822aea1ec3</guid>
<link>https://academictorrents.com/details/87ef9ab0ffb6ce89a2f2b4893fce0e822aea1ec3</link>
<description/>
<size>527864</size>
</item><item>
<title>A Graphical Representation of Equivalence Classes of AMP Chain Graphs</title>
<category>Paper</category>
<infohash>39c2fdfdd24673eead933a415f84a238bbdb8273</infohash>
<guid>https://academictorrents.com/details/39c2fdfdd24673eead933a415f84a238bbdb8273</guid>
<link>https://academictorrents.com/details/39c2fdfdd24673eead933a415f84a238bbdb8273</link>
<description/>
<size>308103</size>
</item><item>
<title>Learning Factor Graphs in Polynomial Time and Sample Complexity</title>
<category>Paper</category>
<infohash>06e17519271a6492d76f0a59189a7ee7f5488a2c</infohash>
<guid>https://academictorrents.com/details/06e17519271a6492d76f0a59189a7ee7f5488a2c</guid>
<link>https://academictorrents.com/details/06e17519271a6492d76f0a59189a7ee7f5488a2c</link>
<description/>
<size>355632</size>
</item><item>
<title>Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>cd35083d3efcbc4eb0212f92831346b724f204d1</infohash>
<guid>https://academictorrents.com/details/cd35083d3efcbc4eb0212f92831346b724f204d1</guid>
<link>https://academictorrents.com/details/cd35083d3efcbc4eb0212f92831346b724f204d1</link>
<description/>
<size>200296</size>
</item><item>
<title>Combining PAC-Bayesian and Generic Chaining Bounds</title>
<category>Paper</category>
<infohash>a65b847b6fdd44bf84a6a7c5852c30251efcabe8</infohash>
<guid>https://academictorrents.com/details/a65b847b6fdd44bf84a6a7c5852c30251efcabe8</guid>
<link>https://academictorrents.com/details/a65b847b6fdd44bf84a6a7c5852c30251efcabe8</link>
<description/>
<size>237118</size>
</item><item>
<title>On the Complexity of Learning Lexicographic Strategies</title>
<category>Paper</category>
<infohash>892b58aa3a2bf66f9b96294f4f5124b9785d7243</infohash>
<guid>https://academictorrents.com/details/892b58aa3a2bf66f9b96294f4f5124b9785d7243</guid>
<link>https://academictorrents.com/details/892b58aa3a2bf66f9b96294f4f5124b9785d7243</link>
<description/>
<size>203247</size>
</item><item>
<title>Using Machine Learning to Guide Architecture Simulation</title>
<category>Paper</category>
<infohash>3405e560f80ca053f80116b7e3fef17583f2e846</infohash>
<guid>https://academictorrents.com/details/3405e560f80ca053f80116b7e3fef17583f2e846</guid>
<link>https://academictorrents.com/details/3405e560f80ca053f80116b7e3fef17583f2e846</link>
<description/>
<size>809372</size>
</item><item>
<title>Nested Expectation Propagation for Gaussian Process Classification with a Multinomial Probit Likelihood</title>
<category>Paper</category>
<infohash>e4f58dc83c8dff5c08b8faac05907c971b4f9d25</infohash>
<guid>https://academictorrents.com/details/e4f58dc83c8dff5c08b8faac05907c971b4f9d25</guid>
<link>https://academictorrents.com/details/e4f58dc83c8dff5c08b8faac05907c971b4f9d25</link>
<description/>
<size>1200268</size>
</item><item>
<title>Learning the Structure of Linear Latent Variable Models</title>
<category>Paper</category>
<infohash>4e3120adac0a09818bca00485cc0e5511b9cdaca</infohash>
<guid>https://academictorrents.com/details/4e3120adac0a09818bca00485cc0e5511b9cdaca</guid>
<link>https://academictorrents.com/details/4e3120adac0a09818bca00485cc0e5511b9cdaca</link>
<description/>
<size>451786</size>
</item><item>
<title>Approximating the Permanent with Fractional Belief Propagation</title>
<category>Paper</category>
<infohash>2dace21ae42a298afec2e0a205257bf94b7a22d7</infohash>
<guid>https://academictorrents.com/details/2dace21ae42a298afec2e0a205257bf94b7a22d7</guid>
<link>https://academictorrents.com/details/2dace21ae42a298afec2e0a205257bf94b7a22d7</link>
<description/>
<size>4157733</size>
</item><item>
<title>Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems</title>
<category>Paper</category>
<infohash>e55591692712e84ec086b796c930d5182b77c7bc</infohash>
<guid>https://academictorrents.com/details/e55591692712e84ec086b796c930d5182b77c7bc</guid>
<link>https://academictorrents.com/details/e55591692712e84ec086b796c930d5182b77c7bc</link>
<description/>
<size>206079</size>
</item><item>
<title>Streamwise Feature Selection</title>
<category>Paper</category>
<infohash>ffb340ed10e7cc97d2bb31a5ebc61d47216d02cf</infohash>
<guid>https://academictorrents.com/details/ffb340ed10e7cc97d2bb31a5ebc61d47216d02cf</guid>
<link>https://academictorrents.com/details/ffb340ed10e7cc97d2bb31a5ebc61d47216d02cf</link>
<description/>
<size>163855</size>
</item><item>
<title>Particle Swarm Model Selection (Special Topic on Model Selection)</title>
<category>Paper</category>
<infohash>c0d6ae7d64439abec77021312c7fa9fff0b7a063</infohash>
<guid>https://academictorrents.com/details/c0d6ae7d64439abec77021312c7fa9fff0b7a063</guid>
<link>https://academictorrents.com/details/c0d6ae7d64439abec77021312c7fa9fff0b7a063</link>
<description/>
<size>563417</size>
</item><item>
<title>Second Order Cone Programming Approaches for Handling Missing and Uncertain Data (Special Topic on Machine Learning and Optimization)</title>
<category>Paper</category>
<infohash>9135afd552fbbae3cffb883294da5b29bad5feb0</infohash>
<guid>https://academictorrents.com/details/9135afd552fbbae3cffb883294da5b29bad5feb0</guid>
<link>https://academictorrents.com/details/9135afd552fbbae3cffb883294da5b29bad5feb0</link>
<description/>
<size>429291</size>
</item><item>
<title>Fourier Theoretic Probabilistic Inference over Permutations</title>
<category>Paper</category>
<infohash>d9c86f4df1aa7b0173662a5a7d8058eca5942ca8</infohash>
<guid>https://academictorrents.com/details/d9c86f4df1aa7b0173662a5a7d8058eca5942ca8</guid>
<link>https://academictorrents.com/details/d9c86f4df1aa7b0173662a5a7d8058eca5942ca8</link>
<description/>
<size>696975</size>
</item><item>
<title>Covariate Shift Adaptation by Importance Weighted Cross Validation</title>
<category>Paper</category>
<infohash>633416b143a9e6ef89d9fb138179a72712a5bedf</infohash>
<guid>https://academictorrents.com/details/633416b143a9e6ef89d9fb138179a72712a5bedf</guid>
<link>https://academictorrents.com/details/633416b143a9e6ef89d9fb138179a72712a5bedf</link>
<description/>
<size>273836</size>
</item><item>
<title>Synergistic Face Detection and Pose Estimation with Energy-Based Models</title>
<category>Paper</category>
<infohash>78d2322a3b1b84e7de747110d9d271c918b32496</infohash>
<guid>https://academictorrents.com/details/78d2322a3b1b84e7de747110d9d271c918b32496</guid>
<link>https://academictorrents.com/details/78d2322a3b1b84e7de747110d9d271c918b32496</link>
<description/>
<size>2242493</size>
</item><item>
<title>Improving the Reliability of Causal Discovery from Small Data Sets Using Argumentation (Special Topic on Causality)</title>
<category>Paper</category>
<infohash>d98191e0b45edf22fa510113fb6315156c371d16</infohash>
<guid>https://academictorrents.com/details/d98191e0b45edf22fa510113fb6315156c371d16</guid>
<link>https://academictorrents.com/details/d98191e0b45edf22fa510113fb6315156c371d16</link>
<description/>
<size>325202</size>
</item><item>
<title>Sparse Boosting</title>
<category>Paper</category>
<infohash>c3cc042283428b9fce9114e881968a362fd71d56</infohash>
<guid>https://academictorrents.com/details/c3cc042283428b9fce9114e881968a362fd71d56</guid>
<link>https://academictorrents.com/details/c3cc042283428b9fce9114e881968a362fd71d56</link>
<description/>
<size>242098</size>
</item><item>
<title>Consistency and Convergence Rates of One-Class SVMs and Related Algorithms</title>
<category>Paper</category>
<infohash>0490e9e2abbba7c1f79d8f9ce5e671ec08abe061</infohash>
<guid>https://academictorrents.com/details/0490e9e2abbba7c1f79d8f9ce5e671ec08abe061</guid>
<link>https://academictorrents.com/details/0490e9e2abbba7c1f79d8f9ce5e671ec08abe061</link>
<description/>
<size>258632</size>
</item><item>
<title>Online Passive-Aggressive Algorithms</title>
<category>Paper</category>
<infohash>8203bc0f1ce7b42b06537d78b8b1315154813abb</infohash>
<guid>https://academictorrents.com/details/8203bc0f1ce7b42b06537d78b8b1315154813abb</guid>
<link>https://academictorrents.com/details/8203bc0f1ce7b42b06537d78b8b1315154813abb</link>
<description/>
<size>372767</size>
</item><item>
<title>Feature Selection for Unsupervised and Supervised Inference: The Emergence of Sparsity in a Weight-Based Approach</title>
<category>Paper</category>
<infohash>8759bd084281cf78c6045dc3191e47666a6923bc</infohash>
<guid>https://academictorrents.com/details/8759bd084281cf78c6045dc3191e47666a6923bc</guid>
<link>https://academictorrents.com/details/8759bd084281cf78c6045dc3191e47666a6923bc</link>
<description/>
<size>402625</size>
</item><item>
<title>Relational Dependency Networks</title>
<category>Paper</category>
<infohash>a970d35f66fb5c847726dccbba20859157f0f6f3</infohash>
<guid>https://academictorrents.com/details/a970d35f66fb5c847726dccbba20859157f0f6f3</guid>
<link>https://academictorrents.com/details/a970d35f66fb5c847726dccbba20859157f0f6f3</link>
<description/>
<size>2921412</size>
</item><item>
<title>Transfer Learning via Inter-Task Mappings for Temporal Difference Learning</title>
<category>Paper</category>
<infohash>1e77af32aaf43c32b30406d455c3ba4f85585024</infohash>
<guid>https://academictorrents.com/details/1e77af32aaf43c32b30406d455c3ba4f85585024</guid>
<link>https://academictorrents.com/details/1e77af32aaf43c32b30406d455c3ba4f85585024</link>
<description/>
<size>511941</size>
</item><item>
<title>Maximum Volume Clustering: A New Discriminative Clustering Approach</title>
<category>Paper</category>
<infohash>56b59a7aca9c896a2b318b965880702cba7ca4eb</infohash>
<guid>https://academictorrents.com/details/56b59a7aca9c896a2b318b965880702cba7ca4eb</guid>
<link>https://academictorrents.com/details/56b59a7aca9c896a2b318b965880702cba7ca4eb</link>
<description/>
<size>531380</size>
</item><item>
<title>Minimax Regret Classifier for Imprecise Class Distributions</title>
<category>Paper</category>
<infohash>5d539a6f725c0f012fe158f786b1238d9e92a4a8</infohash>
<guid>https://academictorrents.com/details/5d539a6f725c0f012fe158f786b1238d9e92a4a8</guid>
<link>https://academictorrents.com/details/5d539a6f725c0f012fe158f786b1238d9e92a4a8</link>
<description/>
<size>329705</size>
</item><item>
<title>Multi-class Protein Classification Using Adaptive Codes</title>
<category>Paper</category>
<infohash>a674b3f64e8c7cfd1949ba8e5f76726c6f0f5382</infohash>
<guid>https://academictorrents.com/details/a674b3f64e8c7cfd1949ba8e5f76726c6f0f5382</guid>
<link>https://academictorrents.com/details/a674b3f64e8c7cfd1949ba8e5f76726c6f0f5382</link>
<description/>
<size>316488</size>
</item><item>
<title>Active Coevolutionary Learning of Deterministic Finite Automata</title>
<category>Paper</category>
<infohash>292679736beef01ce8ac8e6e69069d8cbeef2bf5</infohash>
<guid>https://academictorrents.com/details/292679736beef01ce8ac8e6e69069d8cbeef2bf5</guid>
<link>https://academictorrents.com/details/292679736beef01ce8ac8e6e69069d8cbeef2bf5</link>
<description/>
<size>238246</size>
</item><item>
<title>The Pyramid Match Kernel: Efficient Learning with Sets of Features</title>
<category>Paper</category>
<infohash>ac831f277885bb169067d73f660e092361daa8ec</infohash>
<guid>https://academictorrents.com/details/ac831f277885bb169067d73f660e092361daa8ec</guid>
<link>https://academictorrents.com/details/ac831f277885bb169067d73f660e092361daa8ec</link>
<description/>
<size>7961938</size>
</item><item>
<title>Reproducing Kernel Banach Spaces for Machine Learning</title>
<category>Paper</category>
<infohash>d357547afa430de702b98cf13e85225b11c697d4</infohash>
<guid>https://academictorrents.com/details/d357547afa430de702b98cf13e85225b11c697d4</guid>
<link>https://academictorrents.com/details/d357547afa430de702b98cf13e85225b11c697d4</link>
<description/>
<size>245376</size>
</item><item>
<title>Measuring Differentiability: Unmasking Pseudonymous Authors</title>
<category>Paper</category>
<infohash>ffd9193872a467ea32699c8493efb02ca0f40fdd</infohash>
<guid>https://academictorrents.com/details/ffd9193872a467ea32699c8493efb02ca0f40fdd</guid>
<link>https://academictorrents.com/details/ffd9193872a467ea32699c8493efb02ca0f40fdd</link>
<description/>
<size>179223</size>
</item><item>
<title>Toward Attribute Efficient Learning of Decision Lists and Parities</title>
<category>Paper</category>
<infohash>2edbe151e1ba2a0c63b6c4367849e26cb2f6f21c</infohash>
<guid>https://academictorrents.com/details/2edbe151e1ba2a0c63b6c4367849e26cb2f6f21c</guid>
<link>https://academictorrents.com/details/2edbe151e1ba2a0c63b6c4367849e26cb2f6f21c</link>
<description/>
<size>129715</size>
</item><item>
<title>Regularization-Free Principal Curve Estimation</title>
<category>Paper</category>
<infohash>b506cdd629ea8c0c340cf02b46645a8ebc675ef8</infohash>
<guid>https://academictorrents.com/details/b506cdd629ea8c0c340cf02b46645a8ebc675ef8</guid>
<link>https://academictorrents.com/details/b506cdd629ea8c0c340cf02b46645a8ebc675ef8</link>
<description/>
<size>1298327</size>
</item><item>
<title>Algorithms and Hardness Results for Parallel Large Margin Learning</title>
<category>Paper</category>
<infohash>1c4b00397b095763f3dd14fb916b5dd391c485f6</infohash>
<guid>https://academictorrents.com/details/1c4b00397b095763f3dd14fb916b5dd391c485f6</guid>
<link>https://academictorrents.com/details/1c4b00397b095763f3dd14fb916b5dd391c485f6</link>
<description/>
<size>222173</size>
</item><item>
<title>Spherical-Homoscedastic Distributions: The Equivalency of Spherical and Normal Distributions in Classification</title>
<category>Paper</category>
<infohash>17aad516d43bf716616f072927901baef114b835</infohash>
<guid>https://academictorrents.com/details/17aad516d43bf716616f072927901baef114b835</guid>
<link>https://academictorrents.com/details/17aad516d43bf716616f072927901baef114b835</link>
<description/>
<size>478675</size>
</item><item>
<title>Convergence Theorems for Generalized Alternating Minimization Procedures</title>
<category>Paper</category>
<infohash>ad9358933c47d5b4862f97d9a0788346d3c92283</infohash>
<guid>https://academictorrents.com/details/ad9358933c47d5b4862f97d9a0788346d3c92283</guid>
<link>https://academictorrents.com/details/ad9358933c47d5b4862f97d9a0788346d3c92283</link>
<description/>
<size>170834</size>
</item><item>
<title>Change Point Problems in Linear Dynamical Systems</title>
<category>Paper</category>
<infohash>2b6280b26e86beb1b4f456224dea97ceed2fd78e</infohash>
<guid>https://academictorrents.com/details/2b6280b26e86beb1b4f456224dea97ceed2fd78e</guid>
<link>https://academictorrents.com/details/2b6280b26e86beb1b4f456224dea97ceed2fd78e</link>
<description/>
<size>230780</size>
</item><item>
<title>Fast Kernel Classifiers with Online and Active Learning</title>
<category>Paper</category>
<infohash>e19fecddcaf73540fc5e63d10d36b964763ae0c2</infohash>
<guid>https://academictorrents.com/details/e19fecddcaf73540fc5e63d10d36b964763ae0c2</guid>
<link>https://academictorrents.com/details/e19fecddcaf73540fc5e63d10d36b964763ae0c2</link>
<description/>
<size>507261</size>
</item><item>
<title>Loopy Belief Propagation: Convergence and Effects of Message Errors</title>
<category>Paper</category>
<infohash>ee3ee33ffc7f2f65940e4f7f7071c3335eb70af2</infohash>
<guid>https://academictorrents.com/details/ee3ee33ffc7f2f65940e4f7f7071c3335eb70af2</guid>
<link>https://academictorrents.com/details/ee3ee33ffc7f2f65940e4f7f7071c3335eb70af2</link>
<description/>
<size>354430</size>
</item><item>
<title>The On-Line Shortest Path Problem Under Partial Monitoring</title>
<category>Paper</category>
<infohash>4bbf7327eb4fa9d8b3c5446ec672b47ca1ee7930</infohash>
<guid>https://academictorrents.com/details/4bbf7327eb4fa9d8b3c5446ec672b47ca1ee7930</guid>
<link>https://academictorrents.com/details/4bbf7327eb4fa9d8b3c5446ec672b47ca1ee7930</link>
<description/>
<size>389686</size>
</item><item>
<title>Universal Algorithms for Learning Theory Part I: Piecewise Constant Functions</title>
<category>Paper</category>
<infohash>7ff1031a0652dc89df2e421e96f5db3e329b295f</infohash>
<guid>https://academictorrents.com/details/7ff1031a0652dc89df2e421e96f5db3e329b295f</guid>
<link>https://academictorrents.com/details/7ff1031a0652dc89df2e421e96f5db3e329b295f</link>
<description/>
<size>187308</size>
</item><item>
<title>A Probabilistic Analysis of EM for Mixtures of Separated, Spherical Gaussians</title>
<category>Paper</category>
<infohash>add36f9231e0ff84311f823e9ff7149eff456fb2</infohash>
<guid>https://academictorrents.com/details/add36f9231e0ff84311f823e9ff7149eff456fb2</guid>
<link>https://academictorrents.com/details/add36f9231e0ff84311f823e9ff7149eff456fb2</link>
<description/>
<size>192022</size>
</item><item>
<title>A Generalized Maximum Entropy Approach to Bregman Co-clustering and Matrix Approximation</title>
<category>Paper</category>
<infohash>283544b0bfa9937de05b7d11ea6de7798d85bf14</infohash>
<guid>https://academictorrents.com/details/283544b0bfa9937de05b7d11ea6de7798d85bf14</guid>
<link>https://academictorrents.com/details/283544b0bfa9937de05b7d11ea6de7798d85bf14</link>
<description/>
<size>518047</size>
</item><item>
<title>Joint Harmonic Functions and Their Supervised Connections</title>
<category>Paper</category>
<infohash>baa7a77af59e0094b7e8cf7f8b63b712482b2a04</infohash>
<guid>https://academictorrents.com/details/baa7a77af59e0094b7e8cf7f8b63b712482b2a04</guid>
<link>https://academictorrents.com/details/baa7a77af59e0094b7e8cf7f8b63b712482b2a04</link>
<description/>
<size>749025</size>
</item><item>
<title>Undercomplete Blind Subspace Deconvolution</title>
<category>Paper</category>
<infohash>640ed853870355189e3727bbe0ae431844cf69c2</infohash>
<guid>https://academictorrents.com/details/640ed853870355189e3727bbe0ae431844cf69c2</guid>
<link>https://academictorrents.com/details/640ed853870355189e3727bbe0ae431844cf69c2</link>
<description/>
<size>803707</size>
</item><item>
<title>Proto-value Functions: A Laplacian Framework for Learning Representation and Control in Markov Decision Processes</title>
<category>Paper</category>
<infohash>cca65715cb31b53876c27c0f4ddcd2be9ad7036a</infohash>
<guid>https://academictorrents.com/details/cca65715cb31b53876c27c0f4ddcd2be9ad7036a</guid>
<link>https://academictorrents.com/details/cca65715cb31b53876c27c0f4ddcd2be9ad7036a</link>
<description/>
<size>1343200</size>
</item><item>
<title>Behavioral Shaping for Geometric Concepts</title>
<category>Paper</category>
<infohash>8acaba10076f9619a985ccc393b7b2c004ff9cb3</infohash>
<guid>https://academictorrents.com/details/8acaba10076f9619a985ccc393b7b2c004ff9cb3</guid>
<link>https://academictorrents.com/details/8acaba10076f9619a985ccc393b7b2c004ff9cb3</link>
<description/>
<size>234071</size>
</item><item>
<title>Learnability of Gaussians with Flexible Variances</title>
<category>Paper</category>
<infohash>324053932b051500ed6697786468f0a883adc498</infohash>
<guid>https://academictorrents.com/details/324053932b051500ed6697786468f0a883adc498</guid>
<link>https://academictorrents.com/details/324053932b051500ed6697786468f0a883adc498</link>
<description/>
<size>226296</size>
</item><item>
<title>Euclidean Embedding of Co-occurrence Data</title>
<category>Paper</category>
<infohash>5fcf3e50164ab453a28f487f7e8d5f44ddbe1103</infohash>
<guid>https://academictorrents.com/details/5fcf3e50164ab453a28f487f7e8d5f44ddbe1103</guid>
<link>https://academictorrents.com/details/5fcf3e50164ab453a28f487f7e8d5f44ddbe1103</link>
<description/>
<size>908305</size>
</item><item>
<title>Kernel Methods for Measuring Independence</title>
<category>Paper</category>
<infohash>931d10c8b632338ea2b30f6c749ff934ccf4c7a0</infohash>
<guid>https://academictorrents.com/details/931d10c8b632338ea2b30f6c749ff934ccf4c7a0</guid>
<link>https://academictorrents.com/details/931d10c8b632338ea2b30f6c749ff934ccf4c7a0</link>
<description/>
<size>478952</size>
</item><item>
<title>Generalization Bounds for Ranking Algorithms via Algorithmic Stability</title>
<category>Paper</category>
<infohash>aee818944bd83cd8d3ce374cb5bf9c44f1d21019</infohash>
<guid>https://academictorrents.com/details/aee818944bd83cd8d3ce374cb5bf9c44f1d21019</guid>
<link>https://academictorrents.com/details/aee818944bd83cd8d3ce374cb5bf9c44f1d21019</link>
<description/>
<size>262740</size>
</item><item>
<title>Rearrangement Clustering: Pitfalls, Remedies, and Applications</title>
<category>Paper</category>
<infohash>5e336edd78c873fae1cf334f18fd531a3be8b039</infohash>
<guid>https://academictorrents.com/details/5e336edd78c873fae1cf334f18fd531a3be8b039</guid>
<link>https://academictorrents.com/details/5e336edd78c873fae1cf334f18fd531a3be8b039</link>
<description/>
<size>866519</size>
</item><item>
<title>Harnessing the Expertise of 70,000 Human Editors: Knowledge-Based Feature Generation for Text Categorization</title>
<category>Paper</category>
<infohash>077b92ed26aeecc050c4f268c066563c62e405ce</infohash>
<guid>https://academictorrents.com/details/077b92ed26aeecc050c4f268c066563c62e405ce</guid>
<link>https://academictorrents.com/details/077b92ed26aeecc050c4f268c066563c62e405ce</link>
<description/>
<size>302310</size>
</item><item>
<title>Efficient Online and Batch Learning Using Forward Backward Splitting</title>
<category>Paper</category>
<infohash>73f8e9a0eac100f8ab6409d510f50967c192af34</infohash>
<guid>https://academictorrents.com/details/73f8e9a0eac100f8ab6409d510f50967c192af34</guid>
<link>https://academictorrents.com/details/73f8e9a0eac100f8ab6409d510f50967c192af34</link>
<description/>
<size>514500</size>
</item><item>
<title>Managing Diversity in Regression Ensembles</title>
<category>Paper</category>
<infohash>b6457f6bfce562a8be2919e559d5c11c6acfb75c</infohash>
<guid>https://academictorrents.com/details/b6457f6bfce562a8be2919e559d5c11c6acfb75c</guid>
<link>https://academictorrents.com/details/b6457f6bfce562a8be2919e559d5c11c6acfb75c</link>
<description/>
<size>251188</size>
</item><item>
<title>Ranking the Best Instances</title>
<category>Paper</category>
<infohash>06fed548056eed0d17d12d817dd1e57284da2fe8</infohash>
<guid>https://academictorrents.com/details/06fed548056eed0d17d12d817dd1e57284da2fe8</guid>
<link>https://academictorrents.com/details/06fed548056eed0d17d12d817dd1e57284da2fe8</link>
<description/>
<size>245836</size>
</item><item>
<title>Variational Message Passing</title>
<category>Paper</category>
<infohash>30ce6f474e843e7b65480679a86612b57c72d86c</infohash>
<guid>https://academictorrents.com/details/30ce6f474e843e7b65480679a86612b57c72d86c</guid>
<link>https://academictorrents.com/details/30ce6f474e843e7b65480679a86612b57c72d86c</link>
<description/>
<size>456228</size>
</item><item>
<title>Learning Halfspaces with Malicious Noise</title>
<category>Paper</category>
<infohash>0bb3abb4a388d52ef2c5b4f1fb86a7785ccce0c2</infohash>
<guid>https://academictorrents.com/details/0bb3abb4a388d52ef2c5b4f1fb86a7785ccce0c2</guid>
<link>https://academictorrents.com/details/0bb3abb4a388d52ef2c5b4f1fb86a7785ccce0c2</link>
<description/>
<size>193753</size>
</item><item>
<title>Tree-Based Batch Mode Reinforcement Learning</title>
<category>Paper</category>
<infohash>b81244d60c7d9c0fb203ac3190e8d3582b8f37ba</infohash>
<guid>https://academictorrents.com/details/b81244d60c7d9c0fb203ac3190e8d3582b8f37ba</guid>
<link>https://academictorrents.com/details/b81244d60c7d9c0fb203ac3190e8d3582b8f37ba</link>
<description/>
<size>1321891</size>
</item><item>
<title>Penalized Model-Based Clustering with Application to Variable Selection</title>
<category>Paper</category>
<infohash>2e24eccfdef28d63b32ac448840980f79436776b</infohash>
<guid>https://academictorrents.com/details/2e24eccfdef28d63b32ac448840980f79436776b</guid>
<link>https://academictorrents.com/details/2e24eccfdef28d63b32ac448840980f79436776b</link>
<description/>
<size>127934</size>
</item><item>
<title>Loop Corrections for Approximate Inference on Factor Graphs</title>
<category>Paper</category>
<infohash>b68dfac9d5f26fb7f89bc0915f5b162bc97a706a</infohash>
<guid>https://academictorrents.com/details/b68dfac9d5f26fb7f89bc0915f5b162bc97a706a</guid>
<link>https://academictorrents.com/details/b68dfac9d5f26fb7f89bc0915f5b162bc97a706a</link>
<description/>
<size>781056</size>
</item><item>
<title>Learning Module Networks</title>
<category>Paper</category>
<infohash>238b93637b9599448ee54d04127f17099a8d573f</infohash>
<guid>https://academictorrents.com/details/238b93637b9599448ee54d04127f17099a8d573f</guid>
<link>https://academictorrents.com/details/238b93637b9599448ee54d04127f17099a8d573f</link>
<description/>
<size>406702</size>
</item><item>
<title>A Bayesian Model for Supervised Clustering with the Dirichlet Process Prior</title>
<category>Paper</category>
<infohash>450161730f8fe0c2950c446ea749e4c85c1b2538</infohash>
<guid>https://academictorrents.com/details/450161730f8fe0c2950c446ea749e4c85c1b2538</guid>
<link>https://academictorrents.com/details/450161730f8fe0c2950c446ea749e4c85c1b2538</link>
<description/>
<size>227714</size>
</item><item>
<title>Learning When Concepts Abound</title>
<category>Paper</category>
<infohash>f1e5b72c61f16a4944cb9093db694ae63bf9ce4b</infohash>
<guid>https://academictorrents.com/details/f1e5b72c61f16a4944cb9093db694ae63bf9ce4b</guid>
<link>https://academictorrents.com/details/f1e5b72c61f16a4944cb9093db694ae63bf9ce4b</link>
<description/>
<size>351750</size>
</item><item>
<title>Cautious Collective Classification</title>
<category>Paper</category>
<infohash>8a59a0bcd00c453594f7b284cf929bc1fbf63f5c</infohash>
<guid>https://academictorrents.com/details/8a59a0bcd00c453594f7b284cf929bc1fbf63f5c</guid>
<link>https://academictorrents.com/details/8a59a0bcd00c453594f7b284cf929bc1fbf63f5c</link>
<description/>
<size>525676</size>
</item><item>
<title>Belief Propagation for Continuous State Spaces: Stochastic Message-Passing with Quantitative Guarantees</title>
<category>Paper</category>
<infohash>deffa88321d73009aa74fd5f5d958adf147e92cd</infohash>
<guid>https://academictorrents.com/details/deffa88321d73009aa74fd5f5d958adf147e92cd</guid>
<link>https://academictorrents.com/details/deffa88321d73009aa74fd5f5d958adf147e92cd</link>
<description/>
<size>519895</size>
</item><item>
<title>Smooth ε-Insensitive Regression by Loss Symmetrization</title>
<category>Paper</category>
<infohash>f7a3961beba097d7af71accdb1da74a7eb64095e</infohash>
<guid>https://academictorrents.com/details/f7a3961beba097d7af71accdb1da74a7eb64095e</guid>
<link>https://academictorrents.com/details/f7a3961beba097d7af71accdb1da74a7eb64095e</link>
<description/>
<size>416453</size>
</item><item>
<title>Noise Tolerant Variants of the Perceptron Algorithm</title>
<category>Paper</category>
<infohash>15ce9dd60de4798bab0b6a20d538bf14d05bcf76</infohash>
<guid>https://academictorrents.com/details/15ce9dd60de4798bab0b6a20d538bf14d05bcf76</guid>
<link>https://academictorrents.com/details/15ce9dd60de4798bab0b6a20d538bf14d05bcf76</link>
<description/>
<size>318935</size>
</item><item>
<title>Refinement of Reproducing Kernels</title>
<category>Paper</category>
<infohash>e754ca814cb6c74b06f4c3e409247f88d077aab4</infohash>
<guid>https://academictorrents.com/details/e754ca814cb6c74b06f4c3e409247f88d077aab4</guid>
<link>https://academictorrents.com/details/e754ca814cb6c74b06f4c3e409247f88d077aab4</link>
<description/>
<size>239001</size>
</item><item>
<title>Assessing Approximate Inference for Binary Gaussian Process Classification</title>
<category>Paper</category>
<infohash>7c786a0cd4404f1266ef86e6a3523155755b0064</infohash>
<guid>https://academictorrents.com/details/7c786a0cd4404f1266ef86e6a3523155755b0064</guid>
<link>https://academictorrents.com/details/7c786a0cd4404f1266ef86e6a3523155755b0064</link>
<description/>
<size>483692</size>
</item><item>
<title>Learning Hidden Variable Networks: The Information Bottleneck Approach</title>
<category>Paper</category>
<infohash>f96a575be7a1439c66f0a5d5b06442275493f346</infohash>
<guid>https://academictorrents.com/details/f96a575be7a1439c66f0a5d5b06442275493f346</guid>
<link>https://academictorrents.com/details/f96a575be7a1439c66f0a5d5b06442275493f346</link>
<description/>
<size>661619</size>
</item><item>
<title>Consistent Feature Selection for Pattern Recognition in Polynomial Time</title>
<category>Paper</category>
<infohash>d8c183b2d6c62e5ab43b980a9cec6be2827b5470</infohash>
<guid>https://academictorrents.com/details/d8c183b2d6c62e5ab43b980a9cec6be2827b5470</guid>
<link>https://academictorrents.com/details/d8c183b2d6c62e5ab43b980a9cec6be2827b5470</link>
<description/>
<size>362456</size>
</item><item>
<title>Nonextensive Information Theoretic Kernels on Measures</title>
<category>Paper</category>
<infohash>030a9f9b7403eb989205271685a12474c2ba5b26</infohash>
<guid>https://academictorrents.com/details/030a9f9b7403eb989205271685a12474c2ba5b26</guid>
<link>https://academictorrents.com/details/030a9f9b7403eb989205271685a12474c2ba5b26</link>
<description/>
<size>377540</size>
</item><item>
<title>On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning</title>
<category>Paper</category>
<infohash>b3e686c5b8f75085df7dde5f07d03bdcaa720264</infohash>
<guid>https://academictorrents.com/details/b3e686c5b8f75085df7dde5f07d03bdcaa720264</guid>
<link>https://academictorrents.com/details/b3e686c5b8f75085df7dde5f07d03bdcaa720264</link>
<description/>
<size>174960</size>
</item><item>
<title>Learning in Environments with Unknown Dynamics: Towards more Robust Concept Learners</title>
<category>Paper</category>
<infohash>03d75738d3c2a48e92843292acd94aebb60b6711</infohash>
<guid>https://academictorrents.com/details/03d75738d3c2a48e92843292acd94aebb60b6711</guid>
<link>https://academictorrents.com/details/03d75738d3c2a48e92843292acd94aebb60b6711</link>
<description/>
<size>1111101</size>
</item><item>
<title>Transfer Learning for Reinforcement Learning Domains: A Survey</title>
<category>Paper</category>
<infohash>062f6d7baad93e2bb253a751e4e13f5ee4310fbb</infohash>
<guid>https://academictorrents.com/details/062f6d7baad93e2bb253a751e4e13f5ee4310fbb</guid>
<link>https://academictorrents.com/details/062f6d7baad93e2bb253a751e4e13f5ee4310fbb</link>
<description/>
<size>409391</size>
</item><item>
<title>Multiclass Classification with Multi-Prototype Support Vector Machines</title>
<category>Paper</category>
<infohash>96dd2a85db288e33bd9369d0a5b255c0981ba22e</infohash>
<guid>https://academictorrents.com/details/96dd2a85db288e33bd9369d0a5b255c0981ba22e</guid>
<link>https://academictorrents.com/details/96dd2a85db288e33bd9369d0a5b255c0981ba22e</link>
<description/>
<size>375435</size>
</item><item>
<title>Revised Loss Bounds for the Set Covering Machine and Sample-Compression Loss Bounds for Imbalanced Data</title>
<category>Paper</category>
<infohash>26490a33e07013d27b8cb1b0ff2cf6b899d82b5c</infohash>
<guid>https://academictorrents.com/details/26490a33e07013d27b8cb1b0ff2cf6b899d82b5c</guid>
<link>https://academictorrents.com/details/26490a33e07013d27b8cb1b0ff2cf6b899d82b5c</link>
<description/>
<size>332072</size>
</item><item>
<title>Improving CUR Matrix Decomposition and the Nystrom Approximation via Adaptive Sampling</title>
<category>Paper</category>
<infohash>c9e45bd1ca311546e2695e614da5ba3f07b9c24f</infohash>
<guid>https://academictorrents.com/details/c9e45bd1ca311546e2695e614da5ba3f07b9c24f</guid>
<link>https://academictorrents.com/details/c9e45bd1ca311546e2695e614da5ba3f07b9c24f</link>
<description/>
<size>691673</size>
</item><item>
<title>On the Effectiveness of Laplacian Normalization for Graph Semi-supervised Learning</title>
<category>Paper</category>
<infohash>30d522121723e2032764a9a2db95108fda7ed637</infohash>
<guid>https://academictorrents.com/details/30d522121723e2032764a9a2db95108fda7ed637</guid>
<link>https://academictorrents.com/details/30d522121723e2032764a9a2db95108fda7ed637</link>
<description/>
<size>287253</size>
</item><item>
<title>Hash Kernels for Structured Data</title>
<category>Paper</category>
<infohash>faf29bb0ba94e087f1a8ad8dfdfe51a7f078e207</infohash>
<guid>https://academictorrents.com/details/faf29bb0ba94e087f1a8ad8dfdfe51a7f078e207</guid>
<link>https://academictorrents.com/details/faf29bb0ba94e087f1a8ad8dfdfe51a7f078e207</link>
<description/>
<size>260296</size>
</item><item>
<title>Prioritization Methods for Accelerating MDP Solvers</title>
<category>Paper</category>
<infohash>513be53da526050b2f4fbcded68e6fe284c3fcec</infohash>
<guid>https://academictorrents.com/details/513be53da526050b2f4fbcded68e6fe284c3fcec</guid>
<link>https://academictorrents.com/details/513be53da526050b2f4fbcded68e6fe284c3fcec</link>
<description/>
<size>542572</size>
</item><item>
<title>Classification with Gaussians and Convex Loss</title>
<category>Paper</category>
<infohash>fa6ea4ede21e0185c73f7f6e352c68fb6874a4a7</infohash>
<guid>https://academictorrents.com/details/fa6ea4ede21e0185c73f7f6e352c68fb6874a4a7</guid>
<link>https://academictorrents.com/details/fa6ea4ede21e0185c73f7f6e352c68fb6874a4a7</link>
<description/>
<size>188003</size>
</item><item>
<title>A Unifying View of Sparse Approximate Gaussian Process Regression</title>
<category>Paper</category>
<infohash>963e38f4120b760c06855adc29cada57bc9c3086</infohash>
<guid>https://academictorrents.com/details/963e38f4120b760c06855adc29cada57bc9c3086</guid>
<link>https://academictorrents.com/details/963e38f4120b760c06855adc29cada57bc9c3086</link>
<description/>
<size>254625</size>
</item><item>
<title>Stability of Randomized Learning Algorithms</title>
<category>Paper</category>
<infohash>18cf38dccb2548a5213233c1c11e3ae4438d4ca3</infohash>
<guid>https://academictorrents.com/details/18cf38dccb2548a5213233c1c11e3ae4438d4ca3</guid>
<link>https://academictorrents.com/details/18cf38dccb2548a5213233c1c11e3ae4438d4ca3</link>
<description/>
<size>177757</size>
</item><item>
<title>Learning from Examples as an Inverse Problem</title>
<category>Paper</category>
<infohash>18d7de9c446474fba3abd5fa46f050d3616cc5e1</infohash>
<guid>https://academictorrents.com/details/18d7de9c446474fba3abd5fa46f050d3616cc5e1</guid>
<link>https://academictorrents.com/details/18d7de9c446474fba3abd5fa46f050d3616cc5e1</link>
<description/>
<size>149249</size>
</item><item>
<title>Large Margin Methods for Structured and Interdependent Output Variables</title>
<category>Paper</category>
<infohash>993fe72cb0a9a730e0ae5ec863f4a346142399e2</infohash>
<guid>https://academictorrents.com/details/993fe72cb0a9a730e0ae5ec863f4a346142399e2</guid>
<link>https://academictorrents.com/details/993fe72cb0a9a730e0ae5ec863f4a346142399e2</link>
<description/>
<size>240666</size>
</item><item>
<title>Comments on the "Core Vector Machines: Fast SVM Training on Very Large Data Sets"</title>
<category>Paper</category>
<infohash>62e9371183b2b5d0df2bf79113fd91740d9dcbfd</infohash>
<guid>https://academictorrents.com/details/62e9371183b2b5d0df2bf79113fd91740d9dcbfd</guid>
<link>https://academictorrents.com/details/62e9371183b2b5d0df2bf79113fd91740d9dcbfd</link>
<description/>
<size>131089</size>
</item><item>
<title>An Analysis of Convex Relaxations for MAP Estimation of Discrete MRFs</title>
<category>Paper</category>
<infohash>1ec2b027687067e295a2b1a03d611cf4d0c2f37e</infohash>
<guid>https://academictorrents.com/details/1ec2b027687067e295a2b1a03d611cf4d0c2f37e</guid>
<link>https://academictorrents.com/details/1ec2b027687067e295a2b1a03d611cf4d0c2f37e</link>
<description/>
<size>827673</size>
</item><item>
<title>Classification in Networked Data: A Toolkit and a Univariate Case Study</title>
<category>Paper</category>
<infohash>c59e123ab69fe2b2002f3af9a2ba3c02963293ea</infohash>
<guid>https://academictorrents.com/details/c59e123ab69fe2b2002f3af9a2ba3c02963293ea</guid>
<link>https://academictorrents.com/details/c59e123ab69fe2b2002f3af9a2ba3c02963293ea</link>
<description/>
<size>452337</size>
</item><item>
<title>Gaussian Processes for Ordinal Regression</title>
<category>Paper</category>
<infohash>74e4922391a20e797c0e38949968f0329d7e396c</infohash>
<guid>https://academictorrents.com/details/74e4922391a20e797c0e38949968f0329d7e396c</guid>
<link>https://academictorrents.com/details/74e4922391a20e797c0e38949968f0329d7e396c</link>
<description/>
<size>269257</size>
</item><item>
<title>What's Strange About Recent Events (WSARE): An Algorithm for the Early Detection of Disease Outbreaks</title>
<category>Paper</category>
<infohash>5a52cfe03369a59117da5ee5c9660c1c8e1a19f1</infohash>
<guid>https://academictorrents.com/details/5a52cfe03369a59117da5ee5c9660c1c8e1a19f1</guid>
<link>https://academictorrents.com/details/5a52cfe03369a59117da5ee5c9660c1c8e1a19f1</link>
<description/>
<size>276211</size>
</item><item>
<title>A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data</title>
<category>Paper</category>
<infohash>f4470eb8bc3a6f697df61bde319fd56e3a9d6733</infohash>
<guid>https://academictorrents.com/details/f4470eb8bc3a6f697df61bde319fd56e3a9d6733</guid>
<link>https://academictorrents.com/details/f4470eb8bc3a6f697df61bde319fd56e3a9d6733</link>
<description/>
<size>401640</size>
</item><item>
<title>An MDP-Based Recommender System</title>
<category>Paper</category>
<infohash>c081d6f2e2177694c10aa6d77b0f569d718aa277</infohash>
<guid>https://academictorrents.com/details/c081d6f2e2177694c10aa6d77b0f569d718aa277</guid>
<link>https://academictorrents.com/details/c081d6f2e2177694c10aa6d77b0f569d718aa277</link>
<description/>
<size>569219</size>
</item><item>
<title>Estimation of Sparse Binary Pairwise Markov Networks using Pseudo-likelihoods</title>
<category>Paper</category>
<infohash>821b0dafee4af63753f358d0e28ade1c0dbaba07</infohash>
<guid>https://academictorrents.com/details/821b0dafee4af63753f358d0e28ade1c0dbaba07</guid>
<link>https://academictorrents.com/details/821b0dafee4af63753f358d0e28ade1c0dbaba07</link>
<description/>
<size>243920</size>
</item><item>
<title>Nonlinear Boosting Projections for Ensemble Construction</title>
<category>Paper</category>
<infohash>26f0ee842a3ac66f7e1a75d9585a06872284150b</infohash>
<guid>https://academictorrents.com/details/26f0ee842a3ac66f7e1a75d9585a06872284150b</guid>
<link>https://academictorrents.com/details/26f0ee842a3ac66f7e1a75d9585a06872284150b</link>
<description/>
<size>10517989</size>
</item><item>
<title>Scalable Collaborative Filtering Approaches for Large Recommender Systems (Special Topic on Mining and Learning with Graphs and Relations)</title>
<category>Paper</category>
<infohash>c9a25c873abe71f40c1fc3719f476fe1d4386c76</infohash>
<guid>https://academictorrents.com/details/c9a25c873abe71f40c1fc3719f476fe1d4386c76</guid>
<link>https://academictorrents.com/details/c9a25c873abe71f40c1fc3719f476fe1d4386c76</link>
<description/>
<size>335410</size>
</item><item>
<title>GPstuff: Bayesian Modeling with Gaussian Processes</title>
<category>Paper</category>
<infohash>5c019c7cab9d0872cb5bcdd8d0ce2e44e8c12ef6</infohash>
<guid>https://academictorrents.com/details/5c019c7cab9d0872cb5bcdd8d0ce2e44e8c12ef6</guid>
<link>https://academictorrents.com/details/5c019c7cab9d0872cb5bcdd8d0ce2e44e8c12ef6</link>
<description/>
<size>74117</size>
</item><item>
<title>Beyond Fano's Inequality: Bounds on the Optimal F-Score, BER, and Cost-Sensitive Risk and Their Implications</title>
<category>Paper</category>
<infohash>3f4351f654434c014ffa7bd11b3bec535848fc15</infohash>
<guid>https://academictorrents.com/details/3f4351f654434c014ffa7bd11b3bec535848fc15</guid>
<link>https://academictorrents.com/details/3f4351f654434c014ffa7bd11b3bec535848fc15</link>
<description/>
<size>542557</size>
</item><item>
<title>Consistency and Localizability</title>
<category>Paper</category>
<infohash>c4d2545b047bdf64371c87bc26833b977c1fe8d8</infohash>
<guid>https://academictorrents.com/details/c4d2545b047bdf64371c87bc26833b977c1fe8d8</guid>
<link>https://academictorrents.com/details/c4d2545b047bdf64371c87bc26833b977c1fe8d8</link>
<description/>
<size>223506</size>
</item><item>
<title>RL-Glue: Language-Independent Software for Reinforcement-Learning Experiments (Machine Learning Open Source Software Paper)</title>
<category>Paper</category>
<infohash>c08bf14da29f4bf06dd18a4d530910390181e330</infohash>
<guid>https://academictorrents.com/details/c08bf14da29f4bf06dd18a4d530910390181e330</guid>
<link>https://academictorrents.com/details/c08bf14da29f4bf06dd18a4d530910390181e330</link>
<description/>
<size>50879</size>
</item><item>
<title>Multiclass Boosting for Weak Classifiers</title>
<category>Paper</category>
<infohash>a92a8bf349ba69781681b3624b2653bdc5ec4398</infohash>
<guid>https://academictorrents.com/details/a92a8bf349ba69781681b3624b2653bdc5ec4398</guid>
<link>https://academictorrents.com/details/a92a8bf349ba69781681b3624b2653bdc5ec4398</link>
<description/>
<size>194468</size>
</item><item>
<title>Application of Non Parametric Empirical Bayes Estimation to High Dimensional Classification</title>
<category>Paper</category>
<infohash>9e968a69072c4814b9e3bc4736619f0713d38c67</infohash>
<guid>https://academictorrents.com/details/9e968a69072c4814b9e3bc4736619f0713d38c67</guid>
<link>https://academictorrents.com/details/9e968a69072c4814b9e3bc4736619f0713d38c67</link>
<description/>
<size>140389</size>
</item><item>
<title>When Is There a Representer Theorem? Vector Versus Matrix Regularizers</title>
<category>Paper</category>
<infohash>27b9e9808ffb19e126acc7ce1746d81fd81030f3</infohash>
<guid>https://academictorrents.com/details/27b9e9808ffb19e126acc7ce1746d81fd81030f3</guid>
<link>https://academictorrents.com/details/27b9e9808ffb19e126acc7ce1746d81fd81030f3</link>
<description/>
<size>196765</size>
</item><item>
<title>Active Learning to Recognize Multiple Types of Plankton</title>
<category>Paper</category>
<infohash>7d532a9068cf1009a49d40ff7595b57703bbf08e</infohash>
<guid>https://academictorrents.com/details/7d532a9068cf1009a49d40ff7595b57703bbf08e</guid>
<link>https://academictorrents.com/details/7d532a9068cf1009a49d40ff7595b57703bbf08e</link>
<description/>
<size>216981</size>
</item><item>
<title>Learning the Kernel with Hyperkernels (Kernel Machines Section)</title>
<category>Paper</category>
<infohash>4ab608c98f874ab10ec1b77af3b5ca83c85d2a51</infohash>
<guid>https://academictorrents.com/details/4ab608c98f874ab10ec1b77af3b5ca83c85d2a51</guid>
<link>https://academictorrents.com/details/4ab608c98f874ab10ec1b77af3b5ca83c85d2a51</link>
<description/>
<size>434770</size>
</item><item>
<title>Using Local Dependencies within Batches to Improve Large Margin Classifiers</title>
<category>Paper</category>
<infohash>754104894f25f92ea726ac29a6e34d460bd61f5f</infohash>
<guid>https://academictorrents.com/details/754104894f25f92ea726ac29a6e34d460bd61f5f</guid>
<link>https://academictorrents.com/details/754104894f25f92ea726ac29a6e34d460bd61f5f</link>
<description/>
<size>268976</size>
</item><item>
<title>Information Bottleneck for Gaussian Variables</title>
<category>Paper</category>
<infohash>d3cecd5b9df1147c2f62c43bcfb4f1d66f213a9f</infohash>
<guid>https://academictorrents.com/details/d3cecd5b9df1147c2f62c43bcfb4f1d66f213a9f</guid>
<link>https://academictorrents.com/details/d3cecd5b9df1147c2f62c43bcfb4f1d66f213a9f</link>
<description/>
<size>518384</size>
</item><item>
<title>Machine Learning Methods for Predicting Failures in Hard Drives: A Multiple-Instance Application</title>
<category>Paper</category>
<infohash>fd1725daad46d01243c91e36fa9226ef6f1e374c</infohash>
<guid>https://academictorrents.com/details/fd1725daad46d01243c91e36fa9226ef6f1e374c</guid>
<link>https://academictorrents.com/details/fd1725daad46d01243c91e36fa9226ef6f1e374c</link>
<description/>
<size>274510</size>
</item><item>
<title>Learning Multiple Tasks with Kernel Methods</title>
<category>Paper</category>
<infohash>6e30834fce44ef979e8c369ce942760284c550be</infohash>
<guid>https://academictorrents.com/details/6e30834fce44ef979e8c369ce942760284c550be</guid>
<link>https://academictorrents.com/details/6e30834fce44ef979e8c369ce942760284c550be</link>
<description/>
<size>164094</size>
</item><item>
<title>Java-ML: A Machine Learning Library (Machine Learning Open Source Software Paper)</title>
<category>Paper</category>
<infohash>51f1fcb31e6a52e3ef2e30d60a9c7c03b82d8b3f</infohash>
<guid>https://academictorrents.com/details/51f1fcb31e6a52e3ef2e30d60a9c7c03b82d8b3f</guid>
<link>https://academictorrents.com/details/51f1fcb31e6a52e3ef2e30d60a9c7c03b82d8b3f</link>
<description/>
<size>28916</size>
</item><item>
<title>Ultrahigh Dimensional Feature Selection: Beyond The Linear Model</title>
<category>Paper</category>
<infohash>58d5bf59fec3abe12ec451279938a4bc71ad6bfa</infohash>
<guid>https://academictorrents.com/details/58d5bf59fec3abe12ec451279938a4bc71ad6bfa</guid>
<link>https://academictorrents.com/details/58d5bf59fec3abe12ec451279938a4bc71ad6bfa</link>
<description/>
<size>195894</size>
</item><item>
<title>MAGIC Summoning: Towards Automatic Suggesting and Testing of Gestures With Low Probability of False Positives During Use</title>
<category>Paper</category>
<infohash>19f00a21ce64d05278c5ecee12b6d0c1c2bd4b84</infohash>
<guid>https://academictorrents.com/details/19f00a21ce64d05278c5ecee12b6d0c1c2bd4b84</guid>
<link>https://academictorrents.com/details/19f00a21ce64d05278c5ecee12b6d0c1c2bd4b84</link>
<description/>
<size>2044339</size>
</item><item>
<title>An Algorithm for Reading Dependencies from the Minimal Undirected Independence Map of a Graphoid that Satisfies Weak Transitivity</title>
<category>Paper</category>
<infohash>519e15dbeab4f1c9fb79c7699f75db6089c6b6bf</infohash>
<guid>https://academictorrents.com/details/519e15dbeab4f1c9fb79c7699f75db6089c6b6bf</guid>
<link>https://academictorrents.com/details/519e15dbeab4f1c9fb79c7699f75db6089c6b6bf</link>
<description/>
<size>190689</size>
</item><item>
<title>Denoising Source Separation</title>
<category>Paper</category>
<infohash>510729a50981a497de66ac7cec570eb08a6b7c79</infohash>
<guid>https://academictorrents.com/details/510729a50981a497de66ac7cec570eb08a6b7c79</guid>
<link>https://academictorrents.com/details/510729a50981a497de66ac7cec570eb08a6b7c79</link>
<description/>
<size>1950865</size>
</item><item>
<title>Expectation Consistent Approximate Inference</title>
<category>Paper</category>
<infohash>3c48b34e2cfd803c83257a6477ac0ab0ef49fa12</infohash>
<guid>https://academictorrents.com/details/3c48b34e2cfd803c83257a6477ac0ab0ef49fa12</guid>
<link>https://academictorrents.com/details/3c48b34e2cfd803c83257a6477ac0ab0ef49fa12</link>
<description/>
<size>284259</size>
</item><item>
<title>Sparse Online Learning via Truncated Gradient</title>
<category>Paper</category>
<infohash>4dc4963adefc2e475287cf84eafe511d01a35d2b</infohash>
<guid>https://academictorrents.com/details/4dc4963adefc2e475287cf84eafe511d01a35d2b</guid>
<link>https://academictorrents.com/details/4dc4963adefc2e475287cf84eafe511d01a35d2b</link>
<description/>
<size>287281</size>
</item><item>
<title>Working Set Selection Using Second Order Information for Training Support Vector Machines</title>
<category>Paper</category>
<infohash>8423422eb10dd6a9fbe8a6e1fdb6380913d6bd84</infohash>
<guid>https://academictorrents.com/details/8423422eb10dd6a9fbe8a6e1fdb6380913d6bd84</guid>
<link>https://academictorrents.com/details/8423422eb10dd6a9fbe8a6e1fdb6380913d6bd84</link>
<description/>
<size>440141</size>
</item><item>
<title>Controlling the False Discovery Rate of the Association/Causality Structure Learned with the PC Algorithm (Special Topic on Mining and Learning with Graphs and Relations)</title>
<category>Paper</category>
<infohash>c621ab96861aed981ed53a1bb39bdd86c6d95467</infohash>
<guid>https://academictorrents.com/details/c621ab96861aed981ed53a1bb39bdd86c6d95467</guid>
<link>https://academictorrents.com/details/c621ab96861aed981ed53a1bb39bdd86c6d95467</link>
<description/>
<size>469197</size>
</item><item>
<title>Frames, Reproducing Kernels, Regularization and Learning</title>
<category>Paper</category>
<infohash>f33ce848cd9ff297cf94dee7a196b7b3a6141254</infohash>
<guid>https://academictorrents.com/details/f33ce848cd9ff297cf94dee7a196b7b3a6141254</guid>
<link>https://academictorrents.com/details/f33ce848cd9ff297cf94dee7a196b7b3a6141254</link>
<description/>
<size>338347</size>
</item><item>
<title>Adaptive False Discovery Rate Control under Independence and Dependence</title>
<category>Paper</category>
<infohash>a6505f1745d9214d0fbc5bf66f7b790d8e8cc363</infohash>
<guid>https://academictorrents.com/details/a6505f1745d9214d0fbc5bf66f7b790d8e8cc363</guid>
<link>https://academictorrents.com/details/a6505f1745d9214d0fbc5bf66f7b790d8e8cc363</link>
<description/>
<size>364223</size>
</item><item>
<title>Learning Linear Ranking Functions for Beam Search with Application to Planning</title>
<category>Paper</category>
<infohash>8c2e77b844696e7c0960ea4dd415192de1afc31e</infohash>
<guid>https://academictorrents.com/details/8c2e77b844696e7c0960ea4dd415192de1afc31e</guid>
<link>https://academictorrents.com/details/8c2e77b844696e7c0960ea4dd415192de1afc31e</link>
<description/>
<size>354721</size>
</item><item>
<title>SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent</title>
<category>Paper</category>
<infohash>b34015056bd14ce7c9ef67cd8f4e209a8c7dc697</infohash>
<guid>https://academictorrents.com/details/b34015056bd14ce7c9ef67cd8f4e209a8c7dc697</guid>
<link>https://academictorrents.com/details/b34015056bd14ce7c9ef67cd8f4e209a8c7dc697</link>
<description/>
<size>151450</size>
</item><item>
<title>Bayesian Network Structure Learning by Recursive Autonomy Identification</title>
<category>Paper</category>
<infohash>148196cd681145324b6e1ac5b92572ea388e1827</infohash>
<guid>https://academictorrents.com/details/148196cd681145324b6e1ac5b92572ea388e1827</guid>
<link>https://academictorrents.com/details/148196cd681145324b6e1ac5b92572ea388e1827</link>
<description/>
<size>532303</size>
</item><item>
<title>Tapkee: An Efficient Dimension Reduction Library</title>
<category>Paper</category>
<infohash>ec09477ac7af2e3bb135819d83619f1a4fcc0c37</infohash>
<guid>https://academictorrents.com/details/ec09477ac7af2e3bb135819d83619f1a4fcc0c37</guid>
<link>https://academictorrents.com/details/ec09477ac7af2e3bb135819d83619f1a4fcc0c37</link>
<description/>
<size>598465</size>
</item><item>
<title>Counterfactual Reasoning and Learning Systems: The Example of Computational Advertising</title>
<category>Paper</category>
<infohash>76b9a4b27cd4f3436b349e438be1c1663abeebea</infohash>
<guid>https://academictorrents.com/details/76b9a4b27cd4f3436b349e438be1c1663abeebea</guid>
<link>https://academictorrents.com/details/76b9a4b27cd4f3436b349e438be1c1663abeebea</link>
<description/>
<size>1394801</size>
</item><item>
<title>Python Environment for Bayesian Learning: Inferring the Structure of Bayesian Networks from Knowledge and Data (Machine Learning Open Source Software Paper)</title>
<category>Paper</category>
<infohash>e684c0edea6d7ec83fb16980bdcb7e502adef004</infohash>
<guid>https://academictorrents.com/details/e684c0edea6d7ec83fb16980bdcb7e502adef004</guid>
<link>https://academictorrents.com/details/e684c0edea6d7ec83fb16980bdcb7e502adef004</link>
<description/>
<size>37216</size>
</item><item>
<title>Universal Kernel-Based Learning with Applications to Regular Languages (Special Topic on Mining and Learning with Graphs and Relations)</title>
<category>Paper</category>
<infohash>f7268a3b3b5c240236c163bda5d0bbcd6388a94a</infohash>
<guid>https://academictorrents.com/details/f7268a3b3b5c240236c163bda5d0bbcd6388a94a</guid>
<link>https://academictorrents.com/details/f7268a3b3b5c240236c163bda5d0bbcd6388a94a</link>
<description/>
<size>282087</size>
</item><item>
<title>Identification of Recurrent Neural Networks by Bayesian Interrogation Techniques</title>
<category>Paper</category>
<infohash>f620441db22049530cc97763ccc9bddd5769d61b</infohash>
<guid>https://academictorrents.com/details/f620441db22049530cc97763ccc9bddd5769d61b</guid>
<link>https://academictorrents.com/details/f620441db22049530cc97763ccc9bddd5769d61b</link>
<description/>
<size>878323</size>
</item><item>
<title>Low-Rank Kernel Learning with Bregman Matrix Divergences</title>
<category>Paper</category>
<infohash>36fbeaf6bdd3f6743e3b9701d7dc4761275c4f96</infohash>
<guid>https://academictorrents.com/details/36fbeaf6bdd3f6743e3b9701d7dc4761275c4f96</guid>
<link>https://academictorrents.com/details/36fbeaf6bdd3f6743e3b9701d7dc4761275c4f96</link>
<description/>
<size>390237</size>
</item><item>
<title>Learning Acyclic Probabilistic Circuits Using Test Paths</title>
<category>Paper</category>
<infohash>7791936dbe592270bf422723b26427053790de17</infohash>
<guid>https://academictorrents.com/details/7791936dbe592270bf422723b26427053790de17</guid>
<link>https://academictorrents.com/details/7791936dbe592270bf422723b26427053790de17</link>
<description/>
<size>261886</size>
</item><item>
<title>Settable Systems: An Extension of Pearl's Causal Model with Optimization, Equilibrium, and Learning</title>
<category>Paper</category>
<infohash>214a7984eddbaf4f80683d2d3a25f7f937dfa209</infohash>
<guid>https://academictorrents.com/details/214a7984eddbaf4f80683d2d3a25f7f937dfa209</guid>
<link>https://academictorrents.com/details/214a7984eddbaf4f80683d2d3a25f7f937dfa209</link>
<description/>
<size>377573</size>
</item><item>
<title>Learning Approximate Sequential Patterns for Classification</title>
<category>Paper</category>
<infohash>c0ddd60f3b29fb6c6dad16bfe87b2c4699d42c1b</infohash>
<guid>https://academictorrents.com/details/c0ddd60f3b29fb6c6dad16bfe87b2c4699d42c1b</guid>
<link>https://academictorrents.com/details/c0ddd60f3b29fb6c6dad16bfe87b2c4699d42c1b</link>
<description/>
<size>199948</size>
</item><item>
<title>Combining Information Extraction Systems Using Voting and Stacked Generalization</title>
<category>Paper</category>
<infohash>7f914e88ed18d63b49f0cf8cf4878cb6d2b7b348</infohash>
<guid>https://academictorrents.com/details/7f914e88ed18d63b49f0cf8cf4878cb6d2b7b348</guid>
<link>https://academictorrents.com/details/7f914e88ed18d63b49f0cf8cf4878cb6d2b7b348</link>
<description/>
<size>407661</size>
</item><item>
<title>Variational Algorithms for Marginal MAP</title>
<category>Paper</category>
<infohash>ef3fd853ccf3a0c8331ab9908d3e4befabf5ef80</infohash>
<guid>https://academictorrents.com/details/ef3fd853ccf3a0c8331ab9908d3e4befabf5ef80</guid>
<link>https://academictorrents.com/details/ef3fd853ccf3a0c8331ab9908d3e4befabf5ef80</link>
<description/>
<size>665799</size>
</item><item>
<title>Distributed Algorithms for Topic Models</title>
<category>Paper</category>
<infohash>6a4b9d2eecc190f85a80adffe68ba52da77af94e</infohash>
<guid>https://academictorrents.com/details/6a4b9d2eecc190f85a80adffe68ba52da77af94e</guid>
<link>https://academictorrents.com/details/6a4b9d2eecc190f85a80adffe68ba52da77af94e</link>
<description/>
<size>332885</size>
</item><item>
<title>Parallel Vector Field Embedding</title>
<category>Paper</category>
<infohash>e4a5f0b14499f2660ccfeb92ab3af65557fc2067</infohash>
<guid>https://academictorrents.com/details/e4a5f0b14499f2660ccfeb92ab3af65557fc2067</guid>
<link>https://academictorrents.com/details/e4a5f0b14499f2660ccfeb92ab3af65557fc2067</link>
<description/>
<size>4100920</size>
</item><item>
<title>Multi-task Reinforcement Learning in Partially Observable Stochastic Environments</title>
<category>Paper</category>
<infohash>902bb50e342e15cafa431ff8c09f2df85b168c0f</infohash>
<guid>https://academictorrents.com/details/902bb50e342e15cafa431ff8c09f2df85b168c0f</guid>
<link>https://academictorrents.com/details/902bb50e342e15cafa431ff8c09f2df85b168c0f</link>
<description/>
<size>670454</size>
</item><item>
<title>How to Solve Classification and Regression Problems on High-Dimensional Data with a Supervised Extension of Slow Feature Analysis</title>
<category>Paper</category>
<infohash>6588c6a4ac6f3af9467374146a433154c82a6ba1</infohash>
<guid>https://academictorrents.com/details/6588c6a4ac6f3af9467374146a433154c82a6ba1</guid>
<link>https://academictorrents.com/details/6588c6a4ac6f3af9467374146a433154c82a6ba1</link>
<description/>
<size>1190557</size>
</item><item>
<title>Similarity-based Classification: Concepts and Algorithms</title>
<category>Paper</category>
<infohash>bcdd07948277fb3b20b1727ef2104107a05995eb</infohash>
<guid>https://academictorrents.com/details/bcdd07948277fb3b20b1727ef2104107a05995eb</guid>
<link>https://academictorrents.com/details/bcdd07948277fb3b20b1727ef2104107a05995eb</link>
<description/>
<size>8741840</size>
</item><item>
<title>Prediction With Expert Advice For The Brier Game</title>
<category>Paper</category>
<infohash>5839ddb77c57af03dd14e4b9da0cd7ce727664ae</infohash>
<guid>https://academictorrents.com/details/5839ddb77c57af03dd14e4b9da0cd7ce727664ae</guid>
<link>https://academictorrents.com/details/5839ddb77c57af03dd14e4b9da0cd7ce727664ae</link>
<description/>
<size>1338728</size>
</item><item>
<title>The Hidden Life of Latent Variables: Bayesian Learning with Mixed Graph Models</title>
<category>Paper</category>
<infohash>3f017fc1ceb243bf9c912df7e9a81da265ff1122</infohash>
<guid>https://academictorrents.com/details/3f017fc1ceb243bf9c912df7e9a81da265ff1122</guid>
<link>https://academictorrents.com/details/3f017fc1ceb243bf9c912df7e9a81da265ff1122</link>
<description/>
<size>676531</size>
</item><item>
<title>Generalized Spike-and-Slab Priors for Bayesian Group Feature Selection Using Expectation Propagation</title>
<category>Paper</category>
<infohash>e5b57ee36c72734c8692c50356be672c12d35fed</infohash>
<guid>https://academictorrents.com/details/e5b57ee36c72734c8692c50356be672c12d35fed</guid>
<link>https://academictorrents.com/details/e5b57ee36c72734c8692c50356be672c12d35fed</link>
<description/>
<size>1656995</size>
</item><item>
<title>Estimating Labels from Label Proportions</title>
<category>Paper</category>
<infohash>8dc40264fcbfb8c203cb0fd94fa207d0162c7c53</infohash>
<guid>https://academictorrents.com/details/8dc40264fcbfb8c203cb0fd94fa207d0162c7c53</guid>
<link>https://academictorrents.com/details/8dc40264fcbfb8c203cb0fd94fa207d0162c7c53</link>
<description/>
<size>390111</size>
</item><item>
<title>Robust Process Discovery with Artificial Negative Events (Special Topic on Mining and Learning with Graphs and Relations)</title>
<category>Paper</category>
<infohash>25d166d65b19493fe44c29af341d71589179b2d3</infohash>
<guid>https://academictorrents.com/details/25d166d65b19493fe44c29af341d71589179b2d3</guid>
<link>https://academictorrents.com/details/25d166d65b19493fe44c29af341d71589179b2d3</link>
<description/>
<size>499128</size>
</item><item>
<title>Fast Approximate kNN Graph Construction for High Dimensional Data via Recursive Lanczos Bisection</title>
<category>Paper</category>
<infohash>2a95276bf51b951ddafdf9681ff18ecba03b92ea</infohash>
<guid>https://academictorrents.com/details/2a95276bf51b951ddafdf9681ff18ecba03b92ea</guid>
<link>https://academictorrents.com/details/2a95276bf51b951ddafdf9681ff18ecba03b92ea</link>
<description/>
<size>3239712</size>
</item><item>
<title>Lovász theta function, SVMs and Finding Dense Subgraphs</title>
<category>Paper</category>
<infohash>67debaedcfd2ce2480e419ce3a44156135d38b8a</infohash>
<guid>https://academictorrents.com/details/67debaedcfd2ce2480e419ce3a44156135d38b8a</guid>
<link>https://academictorrents.com/details/67debaedcfd2ce2480e419ce3a44156135d38b8a</link>
<description/>
<size>730305</size>
</item><item>
<title>Perturbation Corrections in Approximate Inference: Mixture Modelling Applications</title>
<category>Paper</category>
<infohash>503dcb099f984a19d76edd8303fa9915f4894edb</infohash>
<guid>https://academictorrents.com/details/503dcb099f984a19d76edd8303fa9915f4894edb</guid>
<link>https://academictorrents.com/details/503dcb099f984a19d76edd8303fa9915f4894edb</link>
<description/>
<size>863458</size>
</item><item>
<title>Stationary-Sparse Causality Network Learning</title>
<category>Paper</category>
<infohash>7a3ecd1c3f81fb6a0dda78bc1c2ff8d87758d759</infohash>
<guid>https://academictorrents.com/details/7a3ecd1c3f81fb6a0dda78bc1c2ff8d87758d759</guid>
<link>https://academictorrents.com/details/7a3ecd1c3f81fb6a0dda78bc1c2ff8d87758d759</link>
<description/>
<size>416785</size>
</item><item>
<title>Sub-Local Constraint-Based Learning of Bayesian Networks Using A Joint Dependence Criterion</title>
<category>Paper</category>
<infohash>3cc32b00695bb7bb972bbcdf8e75fb666463db86</infohash>
<guid>https://academictorrents.com/details/3cc32b00695bb7bb972bbcdf8e75fb666463db86</guid>
<link>https://academictorrents.com/details/3cc32b00695bb7bb972bbcdf8e75fb666463db86</link>
<description/>
<size>614808</size>
</item><item>
<title>Global Analytic Solution of Fully-observed Variational Bayesian Matrix Factorization</title>
<category>Paper</category>
<infohash>14361906a005e3523b5470dafe249c2b873cdcca</infohash>
<guid>https://academictorrents.com/details/14361906a005e3523b5470dafe249c2b873cdcca</guid>
<link>https://academictorrents.com/details/14361906a005e3523b5470dafe249c2b873cdcca</link>
<description/>
<size>507623</size>
</item><item>
<title>Stable and Efficient Gaussian Process Calculations</title>
<category>Paper</category>
<infohash>e57b2069c47c25a5bb00f886c925a305eceef20d</infohash>
<guid>https://academictorrents.com/details/e57b2069c47c25a5bb00f886c925a305eceef20d</guid>
<link>https://academictorrents.com/details/e57b2069c47c25a5bb00f886c925a305eceef20d</link>
<description/>
<size>246968</size>
</item><item>
<title>Gaussian Kullback-Leibler Approximate Inference</title>
<category>Paper</category>
<infohash>92db2e5b8a0e24d94e5234e948049f6bc6cc9438</infohash>
<guid>https://academictorrents.com/details/92db2e5b8a0e24d94e5234e948049f6bc6cc9438</guid>
<link>https://academictorrents.com/details/92db2e5b8a0e24d94e5234e948049f6bc6cc9438</link>
<description/>
<size>922451</size>
</item><item>
<title>Similarity-based Clustering by Left-Stochastic Matrix Factorization</title>
<category>Paper</category>
<infohash>f2dec4ca78971464b95b3abad3505fd03b5c2cc1</infohash>
<guid>https://academictorrents.com/details/f2dec4ca78971464b95b3abad3505fd03b5c2cc1</guid>
<link>https://academictorrents.com/details/f2dec4ca78971464b95b3abad3505fd03b5c2cc1</link>
<description/>
<size>317135</size>
</item><item>
<title>Hybrid MPI/OpenMP Parallel Linear Support Vector Machine Training</title>
<category>Paper</category>
<infohash>cadd0708c887e5e73fceb9cbec3123f5e6150822</infohash>
<guid>https://academictorrents.com/details/cadd0708c887e5e73fceb9cbec3123f5e6150822</guid>
<link>https://academictorrents.com/details/cadd0708c887e5e73fceb9cbec3123f5e6150822</link>
<description/>
<size>133809</size>
</item><item>
<title>Deterministic Error Analysis of Support Vector Regression and Related Regularized Kernel Methods</title>
<category>Paper</category>
<infohash>1ad9699408b1524336afed9ad9a4d04295f6b691</infohash>
<guid>https://academictorrents.com/details/1ad9699408b1524336afed9ad9a4d04295f6b691</guid>
<link>https://academictorrents.com/details/1ad9699408b1524336afed9ad9a4d04295f6b691</link>
<description/>
<size>163771</size>
</item><item>
<title>Large-scale SVD and Manifold Learning</title>
<category>Paper</category>
<infohash>57bd784d8ae732ff60e330239ff43ef62c2c3abc</infohash>
<guid>https://academictorrents.com/details/57bd784d8ae732ff60e330239ff43ef62c2c3abc</guid>
<link>https://academictorrents.com/details/57bd784d8ae732ff60e330239ff43ef62c2c3abc</link>
<description/>
<size>4354683</size>
</item><item>
<title>Algorithms for Discovery of Multiple Markov Boundaries</title>
<category>Paper</category>
<infohash>68408d7bda78720ac26f8e057d8f8cde9abd5718</infohash>
<guid>https://academictorrents.com/details/68408d7bda78720ac26f8e057d8f8cde9abd5718</guid>
<link>https://academictorrents.com/details/68408d7bda78720ac26f8e057d8f8cde9abd5718</link>
<description/>
<size>831150</size>
</item><item>
<title>Online Learning with Sample Path Constraints</title>
<category>Paper</category>
<infohash>03465825ecd565378cacfa979f041b0d8f2593be</infohash>
<guid>https://academictorrents.com/details/03465825ecd565378cacfa979f041b0d8f2593be</guid>
<link>https://academictorrents.com/details/03465825ecd565378cacfa979f041b0d8f2593be</link>
<description/>
<size>192136</size>
</item><item>
<title>Reinforcement Learning in Finite MDPs: PAC Analysis</title>
<category>Paper</category>
<infohash>2cd4976d263fd6e5d4b1f8e2f22438b1b82e9cde</infohash>
<guid>https://academictorrents.com/details/2cd4976d263fd6e5d4b1f8e2f22438b1b82e9cde</guid>
<link>https://academictorrents.com/details/2cd4976d263fd6e5d4b1f8e2f22438b1b82e9cde</link>
<description/>
<size>272554</size>
</item><item>
<title>A Survey of Accuracy Evaluation Metrics of Recommendation Tasks</title>
<category>Paper</category>
<infohash>b31f8310f96f1b3563bdb7a3d11878711b63b000</infohash>
<guid>https://academictorrents.com/details/b31f8310f96f1b3563bdb7a3d11878711b63b000</guid>
<link>https://academictorrents.com/details/b31f8310f96f1b3563bdb7a3d11878711b63b000</link>
<description/>
<size>267620</size>
</item><item>
<title>Computing Maximum Likelihood Estimates in Recursive Linear Models with Correlated Errors</title>
<category>Paper</category>
<infohash>7cb135851a967364a44b500ef1382524390f5fd7</infohash>
<guid>https://academictorrents.com/details/7cb135851a967364a44b500ef1382524390f5fd7</guid>
<link>https://academictorrents.com/details/7cb135851a967364a44b500ef1382524390f5fd7</link>
<description/>
<size>174470</size>
</item><item>
<title>Quasi-Newton Method: A New Direction</title>
<category>Paper</category>
<infohash>56fa80cbecf20bd53066d0cbd4b84d5224236912</infohash>
<guid>https://academictorrents.com/details/56fa80cbecf20bd53066d0cbd4b84d5224236912</guid>
<link>https://academictorrents.com/details/56fa80cbecf20bd53066d0cbd4b84d5224236912</link>
<description/>
<size>297809</size>
</item><item>
<title>Data-driven Calibration of Penalties for Least-Squares Regression</title>
<category>Paper</category>
<infohash>fe4da40a6b0027d3ae8c9531cbe9224014e56af1</infohash>
<guid>https://academictorrents.com/details/fe4da40a6b0027d3ae8c9531cbe9224014e56af1</guid>
<link>https://academictorrents.com/details/fe4da40a6b0027d3ae8c9531cbe9224014e56af1</link>
<description/>
<size>262751</size>
</item><item>
<title>Analysis of Perceptron-Based Active Learning</title>
<category>Paper</category>
<infohash>dcff855a3bbf542f6784ee85764cb96ac13c6e55</infohash>
<guid>https://academictorrents.com/details/dcff855a3bbf542f6784ee85764cb96ac13c6e55</guid>
<link>https://academictorrents.com/details/dcff855a3bbf542f6784ee85764cb96ac13c6e55</link>
<description/>
<size>176009</size>
</item><item>
<title>Properties of Monotonic Effects on Directed Acyclic Graphs</title>
<category>Paper</category>
<infohash>78411eabf3e1484d3bd97b2b03fe8cbfb3316859</infohash>
<guid>https://academictorrents.com/details/78411eabf3e1484d3bd97b2b03fe8cbfb3316859</guid>
<link>https://academictorrents.com/details/78411eabf3e1484d3bd97b2b03fe8cbfb3316859</link>
<description/>
<size>181590</size>
</item><item>
<title>CarpeDiem: Optimizing the Viterbi Algorithm and Applications to Supervised Sequential Learning</title>
<category>Paper</category>
<infohash>59c4f8d17a1b6bf8cb10135b87dbd2309ef7c188</infohash>
<guid>https://academictorrents.com/details/59c4f8d17a1b6bf8cb10135b87dbd2309ef7c188</guid>
<link>https://academictorrents.com/details/59c4f8d17a1b6bf8cb10135b87dbd2309ef7c188</link>
<description/>
<size>1149428</size>
</item><item>
<title>Truncated Power Method for Sparse Eigenvalue Problems</title>
<category>Paper</category>
<infohash>807e9ca4e5e7303780944acba61182f8f81717d0</infohash>
<guid>https://academictorrents.com/details/807e9ca4e5e7303780944acba61182f8f81717d0</guid>
<link>https://academictorrents.com/details/807e9ca4e5e7303780944acba61182f8f81717d0</link>
<description/>
<size>645499</size>
</item><item>
<title>Sparse Matrix Inversion with Scaled Lasso</title>
<category>Paper</category>
<infohash>46e403867091a86d6ccc31f9808beedf52adf803</infohash>
<guid>https://academictorrents.com/details/46e403867091a86d6ccc31f9808beedf52adf803</guid>
<link>https://academictorrents.com/details/46e403867091a86d6ccc31f9808beedf52adf803</link>
<description/>
<size>313345</size>
</item><item>
<title>Robustness and Regularization of Support Vector Machines</title>
<category>Paper</category>
<infohash>c213084c164b0d7f80c6b816ceee199b697e2dc3</infohash>
<guid>https://academictorrents.com/details/c213084c164b0d7f80c6b816ceee199b697e2dc3</guid>
<link>https://academictorrents.com/details/c213084c164b0d7f80c6b816ceee199b697e2dc3</link>
<description/>
<size>189265</size>
</item><item>
<title>Perturbative Corrections for Approximate Inference in Gaussian Latent Variable Models</title>
<category>Paper</category>
<infohash>c039fcb1cac9bff23da0ac663e0d1bee6f4aae9c</infohash>
<guid>https://academictorrents.com/details/c039fcb1cac9bff23da0ac663e0d1bee6f4aae9c</guid>
<link>https://academictorrents.com/details/c039fcb1cac9bff23da0ac663e0d1bee6f4aae9c</link>
<description/>
<size>564841</size>
</item><item>
<title>Exploiting Product Distributions to Identify Relevant Variables of Correlation Immune Functions</title>
<category>Paper</category>
<infohash>2a0b3a40040592aef9c4c747bd0e4997360d8e3d</infohash>
<guid>https://academictorrents.com/details/2a0b3a40040592aef9c4c747bd0e4997360d8e3d</guid>
<link>https://academictorrents.com/details/2a0b3a40040592aef9c4c747bd0e4997360d8e3d</link>
<description/>
<size>314362</size>
</item><item>
<title>Keep It Simple And Sparse: Real-Time Action Recognition</title>
<category>Paper</category>
<infohash>0216db9a867c799c7da89b3a052c4a6293b4d7c8</infohash>
<guid>https://academictorrents.com/details/0216db9a867c799c7da89b3a052c4a6293b4d7c8</guid>
<link>https://academictorrents.com/details/0216db9a867c799c7da89b3a052c4a6293b4d7c8</link>
<description/>
<size>1869241</size>
</item><item>
<title>Orange: Data Mining Toolbox in Python</title>
<category>Paper</category>
<infohash>78e7fe1bb1b432f24c4e0226a25d02c2a8ca60c2</infohash>
<guid>https://academictorrents.com/details/78e7fe1bb1b432f24c4e0226a25d02c2a8ca60c2</guid>
<link>https://academictorrents.com/details/78e7fe1bb1b432f24c4e0226a25d02c2a8ca60c2</link>
<description/>
<size>63403</size>
</item><item>
<title>Multivariate Convex Regression with Adaptive Partitioning</title>
<category>Paper</category>
<infohash>0e27179d76982da01c8341a89719f8f227ee9ae4</infohash>
<guid>https://academictorrents.com/details/0e27179d76982da01c8341a89719f8f227ee9ae4</guid>
<link>https://academictorrents.com/details/0e27179d76982da01c8341a89719f8f227ee9ae4</link>
<description/>
<size>1729398</size>
</item><item>
<title>Feature Selection with Ensembles, Artificial Variables, and Redundancy Elimination (Special Topic on Model Selection)</title>
<category>Paper</category>
<infohash>a6f1ed86b6f94fa6834470612936ff64da1a8ae9</infohash>
<guid>https://academictorrents.com/details/a6f1ed86b6f94fa6834470612936ff64da1a8ae9</guid>
<link>https://academictorrents.com/details/a6f1ed86b6f94fa6834470612936ff64da1a8ae9</link>
<description/>
<size>200941</size>
</item><item>
<title>Dynamic Affine-Invariant Shape-Appearance Handshape Features and Classification in Sign Language Videos</title>
<category>Paper</category>
<infohash>6a0ba63fce87c30813082fc424ed2328836b7bbf</infohash>
<guid>https://academictorrents.com/details/6a0ba63fce87c30813082fc424ed2328836b7bbf</guid>
<link>https://academictorrents.com/details/6a0ba63fce87c30813082fc424ed2328836b7bbf</link>
<description/>
<size>3980870</size>
</item><item>
<title>Distributions of Angles in Random Packing on Spheres</title>
<category>Paper</category>
<infohash>2cef7ef506fad29fe636bc6c67d7dd3045bc011b</infohash>
<guid>https://academictorrents.com/details/2cef7ef506fad29fe636bc6c67d7dd3045bc011b</guid>
<link>https://academictorrents.com/details/2cef7ef506fad29fe636bc6c67d7dd3045bc011b</link>
<description/>
<size>276012</size>
</item><item>
<title>Alleviating Naive Bayes Attribute Independence Assumption by Attribute Weighting</title>
<category>Paper</category>
<infohash>13a66360265771ac37ad4fe29881cebb33a764e7</infohash>
<guid>https://academictorrents.com/details/13a66360265771ac37ad4fe29881cebb33a764e7</guid>
<link>https://academictorrents.com/details/13a66360265771ac37ad4fe29881cebb33a764e7</link>
<description/>
<size>442448</size>
</item><item>
<title>Greedy Sparsity-Constrained Optimization</title>
<category>Paper</category>
<infohash>3586e9c854fd5ee46fee11267c739f656e5cbafc</infohash>
<guid>https://academictorrents.com/details/3586e9c854fd5ee46fee11267c739f656e5cbafc</guid>
<link>https://academictorrents.com/details/3586e9c854fd5ee46fee11267c739f656e5cbafc</link>
<description/>
<size>362499</size>
</item><item>
<title>Lower Bounds and Selectivity of Weak-Consistent Policies in Stochastic Multi-Armed Bandit Problem</title>
<category>Paper</category>
<infohash>9e40b4952217ba1d7e8995563dfa9bb7ed456506</infohash>
<guid>https://academictorrents.com/details/9e40b4952217ba1d7e8995563dfa9bb7ed456506</guid>
<link>https://academictorrents.com/details/9e40b4952217ba1d7e8995563dfa9bb7ed456506</link>
<description/>
<size>200238</size>
</item><item>
<title>The Nonparanormal: Semiparametric Estimation of High Dimensional Undirected Graphs</title>
<category>Paper</category>
<infohash>0e020decfe281838b7b7509664c72c4fc51c4cb0</infohash>
<guid>https://academictorrents.com/details/0e020decfe281838b7b7509664c72c4fc51c4cb0</guid>
<link>https://academictorrents.com/details/0e020decfe281838b7b7509664c72c4fc51c4cb0</link>
<description/>
<size>1532710</size>
</item><item>
<title>BudgetedSVM: A Toolbox for Scalable SVM Approximations</title>
<category>Paper</category>
<infohash>b15610978ec89ae1a633e4f4a9dca94a0e22815a</infohash>
<guid>https://academictorrents.com/details/b15610978ec89ae1a633e4f4a9dca94a0e22815a</guid>
<link>https://academictorrents.com/details/b15610978ec89ae1a633e4f4a9dca94a0e22815a</link>
<description/>
<size>74781</size>
</item><item>
<title>Ranked Bandits in Metric Spaces: Learning Diverse Rankings over Large Document Collections</title>
<category>Paper</category>
<infohash>07eaf4e61b499e738b1283822e747bdb6c822993</infohash>
<guid>https://academictorrents.com/details/07eaf4e61b499e738b1283822e747bdb6c822993</guid>
<link>https://academictorrents.com/details/07eaf4e61b499e738b1283822e747bdb6c822993</link>
<description/>
<size>325553</size>
</item><item>
<title>Sparse Activity and Sparse Connectivity in Supervised Learning</title>
<category>Paper</category>
<infohash>4185bdbd907ae4cfa646f255c79f7382fdb0b3a6</infohash>
<guid>https://academictorrents.com/details/4185bdbd907ae4cfa646f255c79f7382fdb0b3a6</guid>
<link>https://academictorrents.com/details/4185bdbd907ae4cfa646f255c79f7382fdb0b3a6</link>
<description/>
<size>526246</size>
</item><item>
<title>Classifying With Confidence From Incomplete Information</title>
<category>Paper</category>
<infohash>1626b334c3611184069e15f2d4a38d284c559405</infohash>
<guid>https://academictorrents.com/details/1626b334c3611184069e15f2d4a38d284c559405</guid>
<link>https://academictorrents.com/details/1626b334c3611184069e15f2d4a38d284c559405</link>
<description/>
<size>1091459</size>
</item><item>
<title>Learning Theory Analysis for Association Rules and Sequential Event Prediction</title>
<category>Paper</category>
<infohash>58947cb7a9cae9ed60c2a86027f988d5c645272f</infohash>
<guid>https://academictorrents.com/details/58947cb7a9cae9ed60c2a86027f988d5c645272f</guid>
<link>https://academictorrents.com/details/58947cb7a9cae9ed60c2a86027f988d5c645272f</link>
<description/>
<size>1985899</size>
</item><item>
<title>Optimal Discovery with Probabilistic Expert Advice: Finite Time Analysis and Macroscopic Optimality</title>
<category>Paper</category>
<infohash>6d1c604147dd7bd88d31a8ca7415bc63ea71f2f5</infohash>
<guid>https://academictorrents.com/details/6d1c604147dd7bd88d31a8ca7415bc63ea71f2f5</guid>
<link>https://academictorrents.com/details/6d1c604147dd7bd88d31a8ca7415bc63ea71f2f5</link>
<description/>
<size>327203</size>
</item><item>
<title>Fast MCMC Sampling for Markov Jump Processes and Extensions</title>
<category>Paper</category>
<infohash>f25629fe4d9c718ae5c024e4c4abe514985c4429</infohash>
<guid>https://academictorrents.com/details/f25629fe4d9c718ae5c024e4c4abe514985c4429</guid>
<link>https://academictorrents.com/details/f25629fe4d9c718ae5c024e4c4abe514985c4429</link>
<description/>
<size>393640</size>
</item><item>
<title>A Least-squares Approach to Direct Importance Estimation</title>
<category>Paper</category>
<infohash>1052dae93cf2dfdee284c06e36384b0c120508ee</infohash>
<guid>https://academictorrents.com/details/1052dae93cf2dfdee284c06e36384b0c120508ee</guid>
<link>https://academictorrents.com/details/1052dae93cf2dfdee284c06e36384b0c120508ee</link>
<description/>
<size>849232</size>
</item><item>
<title>One-shot Learning Gesture Recognition from RGB-D Data Using Bag of Features</title>
<category>Paper</category>
<infohash>14e4b3fffcbf8c4e2497ab35c3ea0c946d16b433</infohash>
<guid>https://academictorrents.com/details/14e4b3fffcbf8c4e2497ab35c3ea0c946d16b433</guid>
<link>https://academictorrents.com/details/14e4b3fffcbf8c4e2497ab35c3ea0c946d16b433</link>
<description/>
<size>2952074</size>
</item><item>
<title>Kernel Bayes' Rule: Bayesian Inference with Positive Definite Kernels</title>
<category>Paper</category>
<infohash>8e407b6e1ef0d8ab9719e4370dab098e5a91f3df</infohash>
<guid>https://academictorrents.com/details/8e407b6e1ef0d8ab9719e4370dab098e5a91f3df</guid>
<link>https://academictorrents.com/details/8e407b6e1ef0d8ab9719e4370dab098e5a91f3df</link>
<description/>
<size>329174</size>
</item><item>
<title>Dimension Independent Similarity Computation</title>
<category>Paper</category>
<infohash>575f027a9c75e3447aae3454e06ad507d974b218</infohash>
<guid>https://academictorrents.com/details/575f027a9c75e3447aae3454e06ad507d974b218</guid>
<link>https://academictorrents.com/details/575f027a9c75e3447aae3454e06ad507d974b218</link>
<description/>
<size>190022</size>
</item><item>
<title>Ranking Forests</title>
<category>Paper</category>
<infohash>647b5acb68f125d42953e0a3b64618d8e49ef188</infohash>
<guid>https://academictorrents.com/details/647b5acb68f125d42953e0a3b64618d8e49ef188</guid>
<link>https://academictorrents.com/details/647b5acb68f125d42953e0a3b64618d8e49ef188</link>
<description/>
<size>1006171</size>
</item><item>
<title>Pairwise Likelihood Ratios for Estimation of Non-Gaussian Structural Equation Models</title>
<category>Paper</category>
<infohash>a7fefbb22f4272f4874df581d31c9f4c999d6fe3</infohash>
<guid>https://academictorrents.com/details/a7fefbb22f4272f4874df581d31c9f4c999d6fe3</guid>
<link>https://academictorrents.com/details/a7fefbb22f4272f4874df581d31c9f4c999d6fe3</link>
<description/>
<size>1163343</size>
</item><item>
<title>On the Convergence of Maximum Variance Unfolding</title>
<category>Paper</category>
<infohash>7724e3ac4793964830bbb1197c1d4b8055077eb4</infohash>
<guid>https://academictorrents.com/details/7724e3ac4793964830bbb1197c1d4b8055077eb4</guid>
<link>https://academictorrents.com/details/7724e3ac4793964830bbb1197c1d4b8055077eb4</link>
<description/>
<size>219460</size>
</item><item>
<title>Differential Privacy for Functions and Functional Data</title>
<category>Paper</category>
<infohash>7c41af39ec87531026ca7613e292cb9fbadcfbb9</infohash>
<guid>https://academictorrents.com/details/7c41af39ec87531026ca7613e292cb9fbadcfbb9</guid>
<link>https://academictorrents.com/details/7c41af39ec87531026ca7613e292cb9fbadcfbb9</link>
<description/>
<size>314746</size>
</item><item>
<title>Conjugate Relation between Loss Functions and Uncertainty Sets in Classification Problems</title>
<category>Paper</category>
<infohash>2ba79fabb855edbf6822f941d893e5d59491f585</infohash>
<guid>https://academictorrents.com/details/2ba79fabb855edbf6822f941d893e5d59491f585</guid>
<link>https://academictorrents.com/details/2ba79fabb855edbf6822f941d893e5d59491f585</link>
<description/>
<size>535153</size>
</item><item>
<title>Divvy: Fast and Intuitive Exploratory Data Analysis</title>
<category>Paper</category>
<infohash>f336d8efafd0aaeae8b622e29b1b95bf3d22597d</infohash>
<guid>https://academictorrents.com/details/f336d8efafd0aaeae8b622e29b1b95bf3d22597d</guid>
<link>https://academictorrents.com/details/f336d8efafd0aaeae8b622e29b1b95bf3d22597d</link>
<description/>
<size>276717</size>
</item><item>
<title>CODA: High Dimensional Copula Discriminant Analysis</title>
<category>Paper</category>
<infohash>65253d2fb4e703172a02fed3a3370fdc617e22ca</infohash>
<guid>https://academictorrents.com/details/65253d2fb4e703172a02fed3a3370fdc617e22ca</guid>
<link>https://academictorrents.com/details/65253d2fb4e703172a02fed3a3370fdc617e22ca</link>
<description/>
<size>399297</size>
</item><item>
<title>Using Symmetry and Evolutionary Search to Minimize Sorting Networks</title>
<category>Paper</category>
<infohash>681dd00655f3a45ecf9a46a8bf5f3602bfe9292a</infohash>
<guid>https://academictorrents.com/details/681dd00655f3a45ecf9a46a8bf5f3602bfe9292a</guid>
<link>https://academictorrents.com/details/681dd00655f3a45ecf9a46a8bf5f3602bfe9292a</link>
<description/>
<size>507961</size>
</item><item>
<title>A Framework for Evaluating Approximation Methods for Gaussian Process Regression</title>
<category>Paper</category>
<infohash>6d758dd0a91c0b6fd19b560b21f7af83e60f9de3</infohash>
<guid>https://academictorrents.com/details/6d758dd0a91c0b6fd19b560b21f7af83e60f9de3</guid>
<link>https://academictorrents.com/details/6d758dd0a91c0b6fd19b560b21f7af83e60f9de3</link>
<description/>
<size>331766</size>
</item><item>
<title>Bayesian Nonparametric Hidden Semi-Markov Models</title>
<category>Paper</category>
<infohash>b36dbb5196a73a82affdf3e546b7fa51ba60bb52</infohash>
<guid>https://academictorrents.com/details/b36dbb5196a73a82affdf3e546b7fa51ba60bb52</guid>
<link>https://academictorrents.com/details/b36dbb5196a73a82affdf3e546b7fa51ba60bb52</link>
<description/>
<size>586134</size>
</item><item>
<title>Stochastic Variational Inference</title>
<category>Paper</category>
<infohash>20a05bcb487cd0d3677f086c964d3e0059537dee</infohash>
<guid>https://academictorrents.com/details/20a05bcb487cd0d3677f086c964d3e0059537dee</guid>
<link>https://academictorrents.com/details/20a05bcb487cd0d3677f086c964d3e0059537dee</link>
<description/>
<size>397775</size>
</item><item>
<title>Comment on "Robustness and Regularization of Support Vector Machines" by H. Xu et al. (Journal of Machine Learning Research, vol. 10, pp. 1485-1510, 2009)</title>
<category>Paper</category>
<infohash>d3fa0705e62cc295e0d843df0907221f026ef597</infohash>
<guid>https://academictorrents.com/details/d3fa0705e62cc295e0d843df0907221f026ef597</guid>
<link>https://academictorrents.com/details/d3fa0705e62cc295e0d843df0907221f026ef597</link>
<description/>
<size>41090</size>
</item><item>
<title>On the Mutual Nearest Neighbors Estimate in Regression</title>
<category>Paper</category>
<infohash>f4682214947aed9fd99205372630202aa1ae315f</infohash>
<guid>https://academictorrents.com/details/f4682214947aed9fd99205372630202aa1ae315f</guid>
<link>https://academictorrents.com/details/f4682214947aed9fd99205372630202aa1ae315f</link>
<description/>
<size>142780</size>
</item><item>
<title>Training Energy-Based Models for Time-Series Imputation</title>
<category>Paper</category>
<infohash>0c162286c06b90e38d7218d05b5112eda0f1a1c1</infohash>
<guid>https://academictorrents.com/details/0c162286c06b90e38d7218d05b5112eda0f1a1c1</guid>
<link>https://academictorrents.com/details/0c162286c06b90e38d7218d05b5112eda0f1a1c1</link>
<description/>
<size>419200</size>
</item><item>
<title>Greedy Feature Selection for Subspace Clustering</title>
<category>Paper</category>
<infohash>b796e5aa53966f8ce62ee9a365f081df7d7be9cf</infohash>
<guid>https://academictorrents.com/details/b796e5aa53966f8ce62ee9a365f081df7d7be9cf</guid>
<link>https://academictorrents.com/details/b796e5aa53966f8ce62ee9a365f081df7d7be9cf</link>
<description/>
<size>956724</size>
</item><item>
<title>Query Induction with Schema-Guided Pruning Strategies</title>
<category>Paper</category>
<infohash>4217eafb21e63fbea07b82c407c23b1348574bd2</infohash>
<guid>https://academictorrents.com/details/4217eafb21e63fbea07b82c407c23b1348574bd2</guid>
<link>https://academictorrents.com/details/4217eafb21e63fbea07b82c407c23b1348574bd2</link>
<description/>
<size>424419</size>
</item><item>
<title>Convex and Scalable Weakly Labeled SVMs</title>
<category>Paper</category>
<infohash>eb3bb1fc339cbee797f5d00a54ab05711852cc34</infohash>
<guid>https://academictorrents.com/details/eb3bb1fc339cbee797f5d00a54ab05711852cc34</guid>
<link>https://academictorrents.com/details/eb3bb1fc339cbee797f5d00a54ab05711852cc34</link>
<description/>
<size>1026847</size>
</item><item>
<title>MLPACK: A Scalable C++ Machine Learning Library</title>
<category>Paper</category>
<infohash>421ca1dc8f130655ce397a1c8debc783c02cbe36</infohash>
<guid>https://academictorrents.com/details/421ca1dc8f130655ce397a1c8debc783c02cbe36</guid>
<link>https://academictorrents.com/details/421ca1dc8f130655ce397a1c8debc783c02cbe36</link>
<description/>
<size>68230</size>
</item><item>
<title>Cluster Analysis: Unsupervised Learning via Supervised Learning with a Non-convex Penalty</title>
<category>Paper</category>
<infohash>993186991db28b9dc900b79d8f0ebadd2cbc52bd</infohash>
<guid>https://academictorrents.com/details/993186991db28b9dc900b79d8f0ebadd2cbc52bd</guid>
<link>https://academictorrents.com/details/993186991db28b9dc900b79d8f0ebadd2cbc52bd</link>
<description/>
<size>240325</size>
</item><item>
<title>A Binary-Classification-Based Metric between Time-Series Distributions and Its Use in Statistical and Learning Problems</title>
<category>Paper</category>
<infohash>c2d3ecf0482fc560115374b49811ec987841f1f0</infohash>
<guid>https://academictorrents.com/details/c2d3ecf0482fc560115374b49811ec987841f1f0</guid>
<link>https://academictorrents.com/details/c2d3ecf0482fc560115374b49811ec987841f1f0</link>
<description/>
<size>207382</size>
</item><item>
<title>Nonparametric Sparsity and Regularization</title>
<category>Paper</category>
<infohash>e7495e1384e497a1fd0abb0681d5c385501c24aa</infohash>
<guid>https://academictorrents.com/details/e7495e1384e497a1fd0abb0681d5c385501c24aa</guid>
<link>https://academictorrents.com/details/e7495e1384e497a1fd0abb0681d5c385501c24aa</link>
<description/>
<size>612850</size>
</item><item>
<title>Language-Motivated Approaches to Action Recognition</title>
<category>Paper</category>
<infohash>0ec67ef0c5b2bd52c368fefa0adcd4d1c1acb6a4</infohash>
<guid>https://academictorrents.com/details/0ec67ef0c5b2bd52c368fefa0adcd4d1c1acb6a4</guid>
<link>https://academictorrents.com/details/0ec67ef0c5b2bd52c368fefa0adcd4d1c1acb6a4</link>
<description/>
<size>1740589</size>
</item><item>
<title>A Max-Norm Constrained Minimization Approach to 1-Bit Matrix Completion</title>
<category>Paper</category>
<infohash>9264f7bc4398f050d9bcf4de98fae7ea9084b21f</infohash>
<guid>https://academictorrents.com/details/9264f7bc4398f050d9bcf4de98fae7ea9084b21f</guid>
<link>https://academictorrents.com/details/9264f7bc4398f050d9bcf4de98fae7ea9084b21f</link>
<description/>
<size>269057</size>
</item><item>
<title>Finding Optimal Bayesian Networks Using Precedence Constraints</title>
<category>Paper</category>
<infohash>849050c3f0bc3e01c779052bdf08fa154bc15035</infohash>
<guid>https://academictorrents.com/details/849050c3f0bc3e01c779052bdf08fa154bc15035</guid>
<link>https://academictorrents.com/details/849050c3f0bc3e01c779052bdf08fa154bc15035</link>
<description/>
<size>343106</size>
</item><item>
<title>Risk Bounds of Learning Processes for Lévy Processes</title>
<category>Paper</category>
<infohash>6c4bc5c78e4ef49baa1f149efc17be6043c6ec80</infohash>
<guid>https://academictorrents.com/details/6c4bc5c78e4ef49baa1f149efc17be6043c6ec80</guid>
<link>https://academictorrents.com/details/6c4bc5c78e4ef49baa1f149efc17be6043c6ec80</link>
<description/>
<size>273289</size>
</item><item>
<title>GURLS: A Least Squares Library for Supervised Learning</title>
<category>Paper</category>
<infohash>b4c0f6ea0506eeff72c634ec5fda84d1642e7bd2</infohash>
<guid>https://academictorrents.com/details/b4c0f6ea0506eeff72c634ec5fda84d1642e7bd2</guid>
<link>https://academictorrents.com/details/b4c0f6ea0506eeff72c634ec5fda84d1642e7bd2</link>
<description/>
<size>76030</size>
</item><item>
<title>Bayesian Canonical Correlation Analysis</title>
<category>Paper</category>
<infohash>eee108e08076ee344aa51cdfc9734c797120602e</infohash>
<guid>https://academictorrents.com/details/eee108e08076ee344aa51cdfc9734c797120602e</guid>
<link>https://academictorrents.com/details/eee108e08076ee344aa51cdfc9734c797120602e</link>
<description/>
<size>515468</size>
</item><item>
<title>Efficient Active Learning of Halfspaces: An Aggressive Approach</title>
<category>Paper</category>
<infohash>d7c4adf97cb54719016ab5fd79b1b371a84515b5</infohash>
<guid>https://academictorrents.com/details/d7c4adf97cb54719016ab5fd79b1b371a84515b5</guid>
<link>https://academictorrents.com/details/d7c4adf97cb54719016ab5fd79b1b371a84515b5</link>
<description/>
<size>2894421</size>
</item><item>
<title>The Rate of Convergence of AdaBoost</title>
<category>Paper</category>
<infohash>ff0b5f0827dd3d3c6d8cda578c71eb14e96e72fa</infohash>
<guid>https://academictorrents.com/details/ff0b5f0827dd3d3c6d8cda578c71eb14e96e72fa</guid>
<link>https://academictorrents.com/details/ff0b5f0827dd3d3c6d8cda578c71eb14e96e72fa</link>
<description/>
<size>314876</size>
</item><item>
<title>The CAM Software for Nonnegative Blind Source Separation in R-Java</title>
<category>Paper</category>
<infohash>84f3cc29df08fb56897e273e4bb9ddb88826ddcd</infohash>
<guid>https://academictorrents.com/details/84f3cc29df08fb56897e273e4bb9ddb88826ddcd</guid>
<link>https://academictorrents.com/details/84f3cc29df08fb56897e273e4bb9ddb88826ddcd</link>
<description/>
<size>637864</size>
</item><item>
<title>Learning Theory Approach to Minimum Error Entropy Criterion</title>
<category>Paper</category>
<infohash>4be84fdbfbea612cf6152061b6e9bcabdb08d727</infohash>
<guid>https://academictorrents.com/details/4be84fdbfbea612cf6152061b6e9bcabdb08d727</guid>
<link>https://academictorrents.com/details/4be84fdbfbea612cf6152061b6e9bcabdb08d727</link>
<description/>
<size>194956</size>
</item><item>
<title>Segregating Event Streams and Noise with a Markov Renewal Process Model</title>
<category>Paper</category>
<infohash>914bd8a803ca859ce68c8f590a0c201bc4f392c5</infohash>
<guid>https://academictorrents.com/details/914bd8a803ca859ce68c8f590a0c201bc4f392c5</guid>
<link>https://academictorrents.com/details/914bd8a803ca859ce68c8f590a0c201bc4f392c5</link>
<description/>
<size>744310</size>
</item><item>
<title>Message-Passing Algorithms for Quadratic Minimization</title>
<category>Paper</category>
<infohash>264e2cd52f86ee907b6b4f3d433b2c5e21cb6c9c</infohash>
<guid>https://academictorrents.com/details/264e2cd52f86ee907b6b4f3d433b2c5e21cb6c9c</guid>
<link>https://academictorrents.com/details/264e2cd52f86ee907b6b4f3d433b2c5e21cb6c9c</link>
<description/>
<size>264041</size>
</item><item>
<title>Semi-Supervised Learning Using Greedy Max-Cut</title>
<category>Paper</category>
<infohash>86353516df61c721c88727c5e08102d62f5bb0c8</infohash>
<guid>https://academictorrents.com/details/86353516df61c721c88727c5e08102d62f5bb0c8</guid>
<link>https://academictorrents.com/details/86353516df61c721c88727c5e08102d62f5bb0c8</link>
<description/>
<size>1493677</size>
</item><item>
<title>A Theory of Multiclass Boosting</title>
<category>Paper</category>
<infohash>345fadbd720a0c19cceae72cd7bcbf3097e4709d</infohash>
<guid>https://academictorrents.com/details/345fadbd720a0c19cceae72cd7bcbf3097e4709d</guid>
<link>https://academictorrents.com/details/345fadbd720a0c19cceae72cd7bcbf3097e4709d</link>
<description/>
<size>2956784</size>
</item><item>
<title>Multicategory Large-Margin Unified Machines</title>
<category>Paper</category>
<infohash>f974c362bb4dd7389d57ca440ce30cbe6d4c1da2</infohash>
<guid>https://academictorrents.com/details/f974c362bb4dd7389d57ca440ce30cbe6d4c1da2</guid>
<link>https://academictorrents.com/details/f974c362bb4dd7389d57ca440ce30cbe6d4c1da2</link>
<description/>
<size>612665</size>
</item><item>
<title>Classifier Selection using the Predicate Depth</title>
<category>Paper</category>
<infohash>214e8915b8577100002a71fa3d47a86b1a029909</infohash>
<guid>https://academictorrents.com/details/214e8915b8577100002a71fa3d47a86b1a029909</guid>
<link>https://academictorrents.com/details/214e8915b8577100002a71fa3d47a86b1a029909</link>
<description/>
<size>243217</size>
</item><item>
<title>Derivative Estimation with Local Polynomial Fitting</title>
<category>Paper</category>
<infohash>229007c16859c349db75d9291007dc6d9164a3d2</infohash>
<guid>https://academictorrents.com/details/229007c16859c349db75d9291007dc6d9164a3d2</guid>
<link>https://academictorrents.com/details/229007c16859c349db75d9291007dc6d9164a3d2</link>
<description/>
<size>590779</size>
</item><item>
<title>PC Algorithm for Nonparanormal Graphical Models</title>
<category>Paper</category>
<infohash>4b00a46b4e282e2374c3828c5c0afe0d52cf106e</infohash>
<guid>https://academictorrents.com/details/4b00a46b4e282e2374c3828c5c0afe0d52cf106e</guid>
<link>https://academictorrents.com/details/4b00a46b4e282e2374c3828c5c0afe0d52cf106e</link>
<description/>
<size>177771</size>
</item><item>
<title>Distribution-Dependent Sample Complexity of Large Margin Learning</title>
<category>Paper</category>
<infohash>f6d27117b042204a40e7e190329034d663c1b108</infohash>
<guid>https://academictorrents.com/details/f6d27117b042204a40e7e190329034d663c1b108</guid>
<link>https://academictorrents.com/details/f6d27117b042204a40e7e190329034d663c1b108</link>
<description/>
<size>335735</size>
</item><item>
<title>Multi-Stage Multi-Task Feature Learning</title>
<category>Paper</category>
<infohash>3e14552bdb167b2b7a22de8cb7dd0c96de0097f7</infohash>
<guid>https://academictorrents.com/details/3e14552bdb167b2b7a22de8cb7dd0c96de0097f7</guid>
<link>https://academictorrents.com/details/3e14552bdb167b2b7a22de8cb7dd0c96de0097f7</link>
<description/>
<size>303999</size>
</item><item>
<title>Sparse Single-Index Model</title>
<category>Paper</category>
<infohash>61059f50d1a161c60b8c1ea44ad0e8d3090bdd1b</infohash>
<guid>https://academictorrents.com/details/61059f50d1a161c60b8c1ea44ad0e8d3090bdd1b</guid>
<link>https://academictorrents.com/details/61059f50d1a161c60b8c1ea44ad0e8d3090bdd1b</link>
<description/>
<size>321557</size>
</item><item>
<title>Random Spanning Trees and the Prediction of Weighted Graphs</title>
<category>Paper</category>
<infohash>66ed1109aa9929c9eb1c24516b32955b7150b2f3</infohash>
<guid>https://academictorrents.com/details/66ed1109aa9929c9eb1c24516b32955b7150b2f3</guid>
<link>https://academictorrents.com/details/66ed1109aa9929c9eb1c24516b32955b7150b2f3</link>
<description/>
<size>369540</size>
</item><item>
<title>Construction of Approximation Spaces for Reinforcement Learning</title>
<category>Paper</category>
<infohash>84509a4a078fa60891ac06266b40a5e462e5aa21</infohash>
<guid>https://academictorrents.com/details/84509a4a078fa60891ac06266b40a5e462e5aa21</guid>
<link>https://academictorrents.com/details/84509a4a078fa60891ac06266b40a5e462e5aa21</link>
<description/>
<size>778545</size>
</item><item>
<title>A C++ Template-Based Reinforcement Learning Library: Fitting the Code to the Mathematics</title>
<category>Paper</category>
<infohash>8e8a341c6948e0f1e4f70fd0386aa5a408bdb07f</infohash>
<guid>https://academictorrents.com/details/8e8a341c6948e0f1e4f70fd0386aa5a408bdb07f</guid>
<link>https://academictorrents.com/details/8e8a341c6948e0f1e4f70fd0386aa5a408bdb07f</link>
<description/>
<size>70368</size>
</item><item>
<title>Optimally Fuzzy Temporal Memory</title>
<category>Paper</category>
<infohash>d9fd9cf32c98c6065ee9f1bcfcefe1eb31bd0153</infohash>
<guid>https://academictorrents.com/details/d9fd9cf32c98c6065ee9f1bcfcefe1eb31bd0153</guid>
<link>https://academictorrents.com/details/d9fd9cf32c98c6065ee9f1bcfcefe1eb31bd0153</link>
<description/>
<size>674312</size>
</item><item>
<title>Fast Generalized Subset Scan for Anomalous Pattern Detection</title>
<category>Paper</category>
<infohash>d0b31ee6d5931a855f800299083e6e7aa3941f17</infohash>
<guid>https://academictorrents.com/details/d0b31ee6d5931a855f800299083e6e7aa3941f17</guid>
<link>https://academictorrents.com/details/d0b31ee6d5931a855f800299083e6e7aa3941f17</link>
<description/>
<size>2755573</size>
</item><item>
<title>A Risk Comparison of Ordinary Least Squares vs Ridge Regression</title>
<category>Paper</category>
<infohash>6c43ba1182eb57633acdfd0ff0dff42d96d34abc</infohash>
<guid>https://academictorrents.com/details/6c43ba1182eb57633acdfd0ff0dff42d96d34abc</guid>
<link>https://academictorrents.com/details/6c43ba1182eb57633acdfd0ff0dff42d96d34abc</link>
<description/>
<size>189762</size>
</item><item>
<title>QuantMiner for Mining Quantitative Association Rules</title>
<category>Paper</category>
<infohash>e2d6ac42e4b4e038afc01aab5c8419bf75f3f85a</infohash>
<guid>https://academictorrents.com/details/e2d6ac42e4b4e038afc01aab5c8419bf75f3f85a</guid>
<link>https://academictorrents.com/details/e2d6ac42e4b4e038afc01aab5c8419bf75f3f85a</link>
<description/>
<size>319048</size>
</item><item>
<title>Consistent Selection of Tuning Parameters via Variable Selection Stability</title>
<category>Paper</category>
<infohash>56b0927f2f3ac7afcd7813e81edcd542582667f2</infohash>
<guid>https://academictorrents.com/details/56b0927f2f3ac7afcd7813e81edcd542582667f2</guid>
<link>https://academictorrents.com/details/56b0927f2f3ac7afcd7813e81edcd542582667f2</link>
<description/>
<size>251371</size>
</item><item>
<title>Supervised Feature Selection in Graphs with Path Coding Penalties and Network Flows</title>
<category>Paper</category>
<infohash>b863eefbdcc6404f3be91c3d095435a1b96f5b70</infohash>
<guid>https://academictorrents.com/details/b863eefbdcc6404f3be91c3d095435a1b96f5b70</guid>
<link>https://academictorrents.com/details/b863eefbdcc6404f3be91c3d095435a1b96f5b70</link>
<description/>
<size>373945</size>
</item><item>
<title>Machine Learning with Operational Costs</title>
<category>Paper</category>
<infohash>15efe86562dcb326f78d160eb9bd2526a5e4a431</infohash>
<guid>https://academictorrents.com/details/15efe86562dcb326f78d160eb9bd2526a5e4a431</guid>
<link>https://academictorrents.com/details/15efe86562dcb326f78d160eb9bd2526a5e4a431</link>
<description/>
<size>824494</size>
</item><item>
<title>Random Walk Kernels and Learning Curves for Gaussian Process Regression on Random Graphs</title>
<category>Paper</category>
<infohash>ace4c246b027b1f9b6e8c1754308eb1cda6c898e</infohash>
<guid>https://academictorrents.com/details/ace4c246b027b1f9b6e8c1754308eb1cda6c898e</guid>
<link>https://academictorrents.com/details/ace4c246b027b1f9b6e8c1754308eb1cda6c898e</link>
<description/>
<size>517105</size>
</item><item>
<title>Learning Bilinear Model for Matching Queries and Documents</title>
<category>Paper</category>
<infohash>afbdf200cac57471377e6b8398f4ce71159f7425</infohash>
<guid>https://academictorrents.com/details/afbdf200cac57471377e6b8398f4ce71159f7425</guid>
<link>https://academictorrents.com/details/afbdf200cac57471377e6b8398f4ce71159f7425</link>
<description/>
<size>350476</size>
</item><item>
<title>Stress Functions for Nonlinear Dimension Reduction, Proximity Analysis, and Graph Drawing</title>
<category>Paper</category>
<infohash>78f800a01c5a4f423e355d96868684c18ca6bd37</infohash>
<guid>https://academictorrents.com/details/78f800a01c5a4f423e355d96868684c18ca6bd37</guid>
<link>https://academictorrents.com/details/78f800a01c5a4f423e355d96868684c18ca6bd37</link>
<description/>
<size>593035</size>
</item><item>
<title>Census-Rural-Urban NTIA-Mashup</title>
<category>Dataset</category>
<infohash>c3cdd751babb66eefc1c45564838bd3769fad0a9</infohash>
<guid>https://academictorrents.com/details/c3cdd751babb66eefc1c45564838bd3769fad0a9</guid>
<link>https://academictorrents.com/details/c3cdd751babb66eefc1c45564838bd3769fad0a9</link>
<description>The file "Census-Rural-Urban NTIA-Mashup.tab" is a listing of all US Census blocks with urban/rural designation, combined with data from the National Telecommunications and Information Administration's National Broadband Map project.</description>
<size>4737480854</size>
</item><item>
<title>Twitter Data - NIPS 2012</title>
<category>Dataset</category>
<infohash>046cf7a75db2a530b1505a4ce125fbe0031f4661</infohash>
<guid>https://academictorrents.com/details/046cf7a75db2a530b1505a4ce125fbe0031f4661</guid>
<link>https://academictorrents.com/details/046cf7a75db2a530b1505a4ce125fbe0031f4661</link>
<description>This dataset consists of "circles" (or "lists") from Twitter. Twitter data was crawled from public sources. The dataset includes node features (profiles), circles, and ego networks. ## Dataset statistics |Attribute|Value| |---|---| |Nodes|81306| |Edges|1768149| |Nodes in largest WCC|81306 (1.000)| |Edges in largest WCC|1768149 (1.000)| |Nodes in largest SCC|68413 (0.841)| |Edges in largest SCC|1685163 (0.953)| |Average clustering coefficient|0.5653| |Number of triangles|13082506| |Fraction of closed triangles|0.06415| |Diameter (longest shortest path)|7| |90-percentile effective diameter|4.5| ## Source (citation) J. McAuley and J. Leskovec. Learning to Discover Social Circles in Ego Networks. NIPS, 2012. ## Files |File|Description| |---|---| |nodeId.edges|The edges in the ego network for the node nodeId. Edges are undirected for facebook, and directed (a follows b) for twitter and gplus. The ego node does not appear, but it is assumed that they follow every node id that appears in this file.| |nodeId.circles|The set of circles for the ego node. Each line contains one circle, consisting of a series of node ids. The first entry in each line is the name of the circle.| |nodeId.feat|The features for each of the nodes that appears in the edge file.| |nodeId.egofeat|The features for the ego user.| |nodeId.featnames|The names of each of the feature dimensions. Features are 1 if the user has this property in their profile, and 0 otherwise. This file has been anonymized for facebook users, since the names of the features would reveal private data.|</description>
<size>22339604</size>
</item><item>
<title>Introduction to Theory of Computation</title>
<category>Paper</category>
<infohash>d746df46e76fcfdb8e0b682a5e47ed1a776db7db</infohash>
<guid>https://academictorrents.com/details/d746df46e76fcfdb8e0b682a5e47ed1a776db7db</guid>
<link>https://academictorrents.com/details/d746df46e76fcfdb8e0b682a5e47ed1a776db7db</link>
<description>This is a free textbook for an undergraduate course on the Theory of Computation, which we have been teaching at Carleton University since 2002. Until the 2011/2012 academic year, this course was offered as a second-year course (COMP 2805) and was compulsory for all Computer Science students. Starting with the 2012/2013 academic year, the course has been downgraded to a third-year optional course (COMP 3803). We have been developing this book since we started teaching this course. Currently, we cover most of the material from Chapters 2–5 during a 12-week term with three hours of classes per week. The material from Chapter 6, on Complexity Theory, is taught in the third-year course COMP 3804 (Design and Analysis of Algorithms). In the early years of COMP 2805, we gave a two-lecture overview of Complexity Theory at the end of the term. Even though this overview has disappeared from the course, we decided to keep Chapter 6. This chapter has not been revised/modified for a long time.</description>
<size>1292372</size>
</item><item>
<title>SDO AIA SunInTime 2014-03-02</title>
<category>Dataset</category>
<infohash>7608b304c99635e3bbb350dd262631b1e2a124a4</infohash>
<guid>https://academictorrents.com/details/7608b304c99635e3bbb350dd262631b1e2a124a4</guid>
<link>https://academictorrents.com/details/7608b304c99635e3bbb350dd262631b1e2a124a4</link>
<description/>
<size>2953098603</size>
</item><item>
<title>SDO AIA SunInTime 2014-03-01</title>
<category>Dataset</category>
<infohash>e1dde8282232cfec1a49c40bdb062895cbe64963</infohash>
<guid>https://academictorrents.com/details/e1dde8282232cfec1a49c40bdb062895cbe64963</guid>
<link>https://academictorrents.com/details/e1dde8282232cfec1a49c40bdb062895cbe64963</link>
<description/>
<size>2951717304</size>
</item><item>
<title>Viking MDIM2.1 Colorized Global Mosaic 232m</title>
<category>Dataset</category>
<infohash>c746fd3441d19772627fd36599dc418241d39452</infohash>
<guid>https://academictorrents.com/details/c746fd3441d19772627fd36599dc418241d39452</guid>
<link>https://academictorrents.com/details/c746fd3441d19772627fd36599dc418241d39452</link>
<description>This global image map of Mars has a resolution of 256 pixels/degree (scale approximately 231 m/pixel at the equator). The colorized mosaic was completed by NASA Ames, which warped the original Viking colorized mosaic and blended it over the latest black-and-white mosaic. This mosaic is known as the Colorized Mars Digital Image Model (MDIM) 2.1. The original MDIM 2.1 replaces two earlier mosaics produced by the USGS from the same set of approximately 4600 Viking Orbiter images. The positional accuracy of features in MDIM 2.1 is estimated to be roughly one pixel (200 m), compared to 3 km for MDIM 2.0 released in 2001 and &gt;6 km for MDIM 1.0 released in 1991. In addition to relatively imprecise geodetic control, the previous mosaics were affected by changing definitions of cartographic parameters (such as the definition of zero longitude), resulting in an overall longitude shift of as much as 0.2° between the early MDIMs and other datasets. The new mosaic uses the most recent coordinate system definitions for Mars. These definitions have been widely adopted by NASA missions and other users of planetary data and are likely to remain in use for a decade or more. As a result, MDIM 2.1 not only registers precisely with data from current missions such as MGS and 2001 Mars Odyssey but will serve as an accurate basemap on which data from future missions can be plotted. The basis for the positional accuracy of MDIM 2.1 is the incorporation of all images in the mosaic into the evolving USGS/RAND global control network of Mars. The primary reason for the greatly improved absolute accuracy of the current version of this network is the incorporation of 1232 globally distributed "ground control points" whose latitude and longitude were constrained to values measured from Mars Orbiter Laser Altimeter (MOLA) data. 
The globally adjusted MOLA dataset has an absolute horizontal accuracy on the order of 100 m, but individual features in images can probably only be tied to MOLA-derived shaded-relief digital image models with a precision on the order of 200 m. Other, lesser contributors to the accuracy of the control solution and mosaic are the use of MOLA-derived elevations for all 37,652 control points, use of updated timing and orientation data for the Viking Orbiter spacecraft, improved measurements of reseau locations in the images leading to more accurate correction of image distortions, and careful checking and re-measurement of control points with large solution residuals. The mosaic is also orthorectified based on the MOLA elevation data, so that parallax distortions present in the earlier versions are eliminated. The root-mean-squared (RMS) error of the control solution is 16 micrometers (1.4 Viking image pixels, or ~300 m on the ground). Visual inspection of the mosaic indicates that both image-to-image seam mismatches and image-to-MOLA registration errors are less than one pixel almost everywhere, with maximum errors on the order of 4 pixels (1 km) occurring in only a few locations. The cartographic constants used in MDIM 2.1 are those adopted by the IAU/IAG in 2000, which have been adopted by the majority of Mars missions and instrument teams. Coordinates (e.g., of the boundaries and centers of the individual files or map quadrangles making up the mosaic) are given in terms of east longitude and planetocentric latitude. The files in cylindrical (Equirectangular) map projection are also constructed so that lines of the map raster are equally spaced in planetocentric latitude. These files will thus register with other datasets based on planetocentric latitude either as-is or after a simple change of scale, but must be resampled in order to register to datasets based on planetographic latitude. 
The global mosaic is divided into 30 regions based on the USGS Mars Chart (MC) series of 1:5,000,000-scale printed maps. All regions are available in Equirectangular projection, which is a generalization of the more familiar Simple Cylindrical projection. Quadrangles 2-29 are provided only in Equirectangular projection; with a center latitude of projection of 0°, this projection is identical to Simple Cylindrical. The polar quadrangles 1 and 30 are available in two Equirectangular sections with center latitudes of projection ±60° and ±75.52248781° respectively, and also as a single file in Polar Stereographic projection. The two Equirectangular sections of the polar quadrangles can be converted to a center latitude of projection of 0° (or equivalently to Simple Cylindrical projection) by 2:1 and 4:1 enlargement in the sample direction, respectively, after which they can be merged with the lower-latitude data. The images used to make MDIM 2.1 were obtained primarily through the red, clear, and minus-blue filters of the Viking Orbiter imaging system, and thus provide a monochromatic view of Mars weighted toward the red end of the visible spectrum. Images were obtained with a wide range of solar incidence angles. It is unfortunately not possible to correct the appearance of both albedo (reflectivity) variations and topographic features for these incidence angle variations simultaneously. The images have therefore been highpass-filtered at a scale of ~50 km to remove regional albedo variations and then normalized so that equal topographic slopes appear with equal contrast everywhere. Photometric processing for MDIM 2.1 incorporates a model of the transmission and scattering of light in the atmosphere that is substantially improved over that used in MDIM 2.0. Residual tonal mismatches between different images after photometric correction were corrected based on a least-squares adjustment of image brightness and contrast. 
Because of these photometric and cosmetic improvements, it was possible to use a less severe highpass filter than for MDIM 2.0, improving the overall appearance of the mosaic. References available from: http://astrogeology.usgs.gov/maps/mdim-2-1</description>
<size>12740265271</size>
</item><item>
<title>Purely P2P Crypto-Currency With Finite Mini-Blockchain</title>
<category>Paper</category>
<infohash>4cc072da6bd32eedfec13e235a22ea4054e50554</infohash>
<guid>https://academictorrents.com/details/4cc072da6bd32eedfec13e235a22ea4054e50554</guid>
<link>https://academictorrents.com/details/4cc072da6bd32eedfec13e235a22ea4054e50554</link>
<description>Almost all P2P crypto-currencies prevent double spending and similar such attacks with a bulky "blockchain" scheme, and the ones which do not typically use some sort of pseudo-centralized solution to manage the transactions. Here I propose a purely P2P crypto-currency scheme with a finite blockchain, dubbed the "mini-blockchain". Each time a new block is solved the oldest block is trimmed from the end of the mini-blockchain so that it always has the same number of blocks. It is argued that the loss of security this trimming process incurs can be solved with a small "proof chain" and the loss of coin ownership data is solved with a database which holds the balance of all non-empty addresses, dubbed the "account tree". The proof chain secures the mini-blockchain and the mini-blockchain secures the account tree. This paper will describe the way in which these three mechanisms can work together to form a system which provides a high level of integrity and security, yet is much slimmer than all other purely P2P currencies. It also offers other potential benefits such as faster transactions and lower fees, quicker network synchronization, support for high levels of traffic, more block space for custom messages, and increased anonymity.</description>
<size>208371</size>
</item><item>
<title>Analysis of the Cryptocurrency Marketplace</title>
<category>Paper</category>
<infohash>daaa86689c42e78c4111b74984d5036a426f6cf6</infohash>
<guid>https://academictorrents.com/details/daaa86689c42e78c4111b74984d5036a426f6cf6</guid>
<link>https://academictorrents.com/details/daaa86689c42e78c4111b74984d5036a426f6cf6</link>
<description>This paper will go over the technical, economic, and social impact of cryptocurrencies such as Bitcoin and Litecoin. This document will go into a comprehensive level of detail about cryptocurrency technologies and protocols, as this is required to familiarize the reader with the principles behind the rapidly emerging open source economic ecosystem. Furthermore, emerging attack vectors of cryptocurrencies will be discussed, such as custom malware campaigns and targeted exploitation.</description>
<size>1977999</size>
</item><item>
<title>Primecoin: Cryptocurrency with Prime Number Proof-of-Work</title>
<category>Paper</category>
<infohash>d0f9accaec8ac9d538fdf9d675105ae1392ea32b</infohash>
<guid>https://academictorrents.com/details/d0f9accaec8ac9d538fdf9d675105ae1392ea32b</guid>
<link>https://academictorrents.com/details/d0f9accaec8ac9d538fdf9d675105ae1392ea32b</link>
<description>A new type of proof-of-work based on searching for prime numbers is introduced in peer-to-peer cryptocurrency designs. Three types of prime chains, known as Cunningham chains of the first kind, Cunningham chains of the second kind, and bi-twin chains, are qualified as proof-of-work. The prime chain is linked to the block hash to preserve the security property of Nakamoto's Bitcoin, while a continuous difficulty evaluation scheme is designed to allow the prime chain to act as adjustable-difficulty proof-of-work in a Bitcoin-like cryptocurrency.</description>
<size>247651</size>
</item><item>
<title>PPCoin: Peer-to-Peer Crypto-Currency with Proof-of-Stake</title>
<category>Paper</category>
<infohash>0bc8878760a3105617da3fa9ba6b97cffad6c24f</infohash>
<guid>https://academictorrents.com/details/0bc8878760a3105617da3fa9ba6b97cffad6c24f</guid>
<link>https://academictorrents.com/details/0bc8878760a3105617da3fa9ba6b97cffad6c24f</link>
<description>A peer-to-peer crypto-currency design derived from Satoshi Nakamoto's Bitcoin. Proof-of-stake replaces proof-of-work to provide most of the network security. Under this hybrid design, proof-of-work mainly provides initial minting and is largely non-essential in the long run. The security level of the network is not dependent on energy consumption in the long term, thus providing an energy-efficient and more cost-competitive peer-to-peer crypto-currency. Proof-of-stake is based on coin age and generated by each node via a hashing scheme bearing similarity to Bitcoin's, but over a limited search space. Block chain history and transaction settlement are further protected by a centrally broadcasted checkpoint mechanism.</description>
<size>41972</size>
</item><item>
<title>Bitcoin: A Peer-to-Peer Electronic Cash System</title>
<category>Paper</category>
<infohash>8c271f4d2e92a3449e2d1bde633cd49f64af888f</infohash>
<guid>https://academictorrents.com/details/8c271f4d2e92a3449e2d1bde633cd49f64af888f</guid>
<link>https://academictorrents.com/details/8c271f4d2e92a3449e2d1bde633cd49f64af888f</link>
<description>A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution to the double-spending problem using a peer-to-peer network. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work. The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power. As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure. Messages are broadcast on a best effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.</description>
<size>184292</size>
</item><item>
<title>Chabot Simon - la modelisation des vagues</title>
<category>Paper</category>
<infohash>9a225c6da528dcfb80656be75bdeae2cdafd7c5b</infohash>
<guid>https://academictorrents.com/details/9a225c6da528dcfb80656be75bdeae2cdafd7c5b</guid>
<link>https://academictorrents.com/details/9a225c6da528dcfb80656be75bdeae2cdafd7c5b</link>
<description>The mathematical modeling of fluid motion has always been an active research topic. What methods are used today to reproduce these motions numerically? What are their computational costs, and how can they be improved? This document attempts to answer these questions by presenting several mathematical methods used in animation and in civil engineering.</description>
<size>2280289</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #271. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>7c1c05819a273ecebfe360b9b1b51ed8333828d4</infohash>
<guid>https://academictorrents.com/details/7c1c05819a273ecebfe360b9b1b51ed8333828d4</guid>
<link>https://academictorrents.com/details/7c1c05819a273ecebfe360b9b1b51ed8333828d4</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #264. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>ffcb84873d60b6147d095f9391b3f7ed46823bd5</infohash>
<guid>https://academictorrents.com/details/ffcb84873d60b6147d095f9391b3f7ed46823bd5</guid>
<link>https://academictorrents.com/details/ffcb84873d60b6147d095f9391b3f7ed46823bd5</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #263. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>e1d640450108b0fd9acaaa7afd210750597074fd</infohash>
<guid>https://academictorrents.com/details/e1d640450108b0fd9acaaa7afd210750597074fd</guid>
<link>https://academictorrents.com/details/e1d640450108b0fd9acaaa7afd210750597074fd</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #262. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>97905af485f7c2ab1593943b0b70771608e619b6</infohash>
<guid>https://academictorrents.com/details/97905af485f7c2ab1593943b0b70771608e619b6</guid>
<link>https://academictorrents.com/details/97905af485f7c2ab1593943b0b70771608e619b6</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #261. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>646ecf450e7c24f566f77efdd468a7c1142becb1</infohash>
<guid>https://academictorrents.com/details/646ecf450e7c24f566f77efdd468a7c1142becb1</guid>
<link>https://academictorrents.com/details/646ecf450e7c24f566f77efdd468a7c1142becb1</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #260. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>8654e3e1bb431a0a232756b442691b10fbb4c295</infohash>
<guid>https://academictorrents.com/details/8654e3e1bb431a0a232756b442691b10fbb4c295</guid>
<link>https://academictorrents.com/details/8654e3e1bb431a0a232756b442691b10fbb4c295</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #259. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>2568d3966c40c6c9843cbc1776d25d66b5e486a1</infohash>
<guid>https://academictorrents.com/details/2568d3966c40c6c9843cbc1776d25d66b5e486a1</guid>
<link>https://academictorrents.com/details/2568d3966c40c6c9843cbc1776d25d66b5e486a1</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #258. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>a367138572a68ea5b7542cb42f06913e8dcf5530</infohash>
<guid>https://academictorrents.com/details/a367138572a68ea5b7542cb42f06913e8dcf5530</guid>
<link>https://academictorrents.com/details/a367138572a68ea5b7542cb42f06913e8dcf5530</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #257. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>974b50ccc93b630097927f31764341c26023762c</infohash>
<guid>https://academictorrents.com/details/974b50ccc93b630097927f31764341c26023762c</guid>
<link>https://academictorrents.com/details/974b50ccc93b630097927f31764341c26023762c</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #256. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>8589b76494cad0527952d9c90b1a1aaf9d176e92</infohash>
<guid>https://academictorrents.com/details/8589b76494cad0527952d9c90b1a1aaf9d176e92</guid>
<link>https://academictorrents.com/details/8589b76494cad0527952d9c90b1a1aaf9d176e92</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #255. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>35135f501d7de7161c9b2b85d38cc4252d890e35</infohash>
<guid>https://academictorrents.com/details/35135f501d7de7161c9b2b85d38cc4252d890e35</guid>
<link>https://academictorrents.com/details/35135f501d7de7161c9b2b85d38cc4252d890e35</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #254. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>c34172b563bad50821bdf670949e8b12d5f35373</infohash>
<guid>https://academictorrents.com/details/c34172b563bad50821bdf670949e8b12d5f35373</guid>
<link>https://academictorrents.com/details/c34172b563bad50821bdf670949e8b12d5f35373</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #253. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>15a38ec1094dfff470f859e5148d691346f70840</infohash>
<guid>https://academictorrents.com/details/15a38ec1094dfff470f859e5148d691346f70840</guid>
<link>https://academictorrents.com/details/15a38ec1094dfff470f859e5148d691346f70840</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #252. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>98903999ffb01bce9346768f0130b6d90453702e</infohash>
<guid>https://academictorrents.com/details/98903999ffb01bce9346768f0130b6d90453702e</guid>
<link>https://academictorrents.com/details/98903999ffb01bce9346768f0130b6d90453702e</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #251. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>dd43c374259164d46709818342f7a96353be6f36</infohash>
<guid>https://academictorrents.com/details/dd43c374259164d46709818342f7a96353be6f36</guid>
<link>https://academictorrents.com/details/dd43c374259164d46709818342f7a96353be6f36</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #250. Type: Velocity W Field</title>
<category>Dataset</category>
<infohash>2972ba7c4af6d7cafc44c6ae483cdbfc20856249</infohash>
<guid>https://academictorrents.com/details/2972ba7c4af6d7cafc44c6ae483cdbfc20856249</guid>
<link>https://academictorrents.com/details/2972ba7c4af6d7cafc44c6ae483cdbfc20856249</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #271. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>6329e5ff0beaa866d7dc076e6d3f64b82a13b691</infohash>
<guid>https://academictorrents.com/details/6329e5ff0beaa866d7dc076e6d3f64b82a13b691</guid>
<link>https://academictorrents.com/details/6329e5ff0beaa866d7dc076e6d3f64b82a13b691</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #264. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>0293c7de363b54efb095145e5ce69c6897a272ba</infohash>
<guid>https://academictorrents.com/details/0293c7de363b54efb095145e5ce69c6897a272ba</guid>
<link>https://academictorrents.com/details/0293c7de363b54efb095145e5ce69c6897a272ba</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #263. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>c0059c7d9785c2ba16d363b39eeedbcaf389c8d3</infohash>
<guid>https://academictorrents.com/details/c0059c7d9785c2ba16d363b39eeedbcaf389c8d3</guid>
<link>https://academictorrents.com/details/c0059c7d9785c2ba16d363b39eeedbcaf389c8d3</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #262. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>bf40dc4e2b76bfd4e2a931e385a85a4f061e9924</infohash>
<guid>https://academictorrents.com/details/bf40dc4e2b76bfd4e2a931e385a85a4f061e9924</guid>
<link>https://academictorrents.com/details/bf40dc4e2b76bfd4e2a931e385a85a4f061e9924</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #261. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>6351bbafd4f94c5822b5ef8b5cd6d56631c7a42f</infohash>
<guid>https://academictorrents.com/details/6351bbafd4f94c5822b5ef8b5cd6d56631c7a42f</guid>
<link>https://academictorrents.com/details/6351bbafd4f94c5822b5ef8b5cd6d56631c7a42f</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #260. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>428a99895c8ccf1406bde2b144200d467951564c</infohash>
<guid>https://academictorrents.com/details/428a99895c8ccf1406bde2b144200d467951564c</guid>
<link>https://academictorrents.com/details/428a99895c8ccf1406bde2b144200d467951564c</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #259. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>e53fab410a750471c86c26275b9363b0336d2d11</infohash>
<guid>https://academictorrents.com/details/e53fab410a750471c86c26275b9363b0336d2d11</guid>
<link>https://academictorrents.com/details/e53fab410a750471c86c26275b9363b0336d2d11</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #258. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>103812a5af0d9b62bb1890c4183749e5d5287b15</infohash>
<guid>https://academictorrents.com/details/103812a5af0d9b62bb1890c4183749e5d5287b15</guid>
<link>https://academictorrents.com/details/103812a5af0d9b62bb1890c4183749e5d5287b15</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #257. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>d0aa4f7aeb4253bd99c21254c13d4a68143a57c4</infohash>
<guid>https://academictorrents.com/details/d0aa4f7aeb4253bd99c21254c13d4a68143a57c4</guid>
<link>https://academictorrents.com/details/d0aa4f7aeb4253bd99c21254c13d4a68143a57c4</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #256. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>0afe224525f51765119cda62c9f1eab186bb1f93</infohash>
<guid>https://academictorrents.com/details/0afe224525f51765119cda62c9f1eab186bb1f93</guid>
<link>https://academictorrents.com/details/0afe224525f51765119cda62c9f1eab186bb1f93</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #255. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>0ac19e4fcc8d9ace35ae76eed298bdda07662546</infohash>
<guid>https://academictorrents.com/details/0ac19e4fcc8d9ace35ae76eed298bdda07662546</guid>
<link>https://academictorrents.com/details/0ac19e4fcc8d9ace35ae76eed298bdda07662546</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #254. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>0bd7dd3dd4b4da7e7270a37b8b2b9c7768b4b960</infohash>
<guid>https://academictorrents.com/details/0bd7dd3dd4b4da7e7270a37b8b2b9c7768b4b960</guid>
<link>https://academictorrents.com/details/0bd7dd3dd4b4da7e7270a37b8b2b9c7768b4b960</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #253. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>3d4fa715353dfc605a7d925db646c260cdbfe757</infohash>
<guid>https://academictorrents.com/details/3d4fa715353dfc605a7d925db646c260cdbfe757</guid>
<link>https://academictorrents.com/details/3d4fa715353dfc605a7d925db646c260cdbfe757</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #252. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>007a490c7b36c823d25c8495ddb5ba9ab3165139</infohash>
<guid>https://academictorrents.com/details/007a490c7b36c823d25c8495ddb5ba9ab3165139</guid>
<link>https://academictorrents.com/details/007a490c7b36c823d25c8495ddb5ba9ab3165139</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #251. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>b48983987a8ffb0ba740fe936eb6d2752e7653cc</infohash>
<guid>https://academictorrents.com/details/b48983987a8ffb0ba740fe936eb6d2752e7653cc</guid>
<link>https://academictorrents.com/details/b48983987a8ffb0ba740fe936eb6d2752e7653cc</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #250. Type: Velocity V Field</title>
<category>Dataset</category>
<infohash>eb33fbf2fa6f794b189a99e5ef5af1d44d6fc6b6</infohash>
<guid>https://academictorrents.com/details/eb33fbf2fa6f794b189a99e5ef5af1d44d6fc6b6</guid>
<link>https://academictorrents.com/details/eb33fbf2fa6f794b189a99e5ef5af1d44d6fc6b6</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #271. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>a7fa5a8f1c752d1fddacbfe8d1d566f0febf6818</infohash>
<guid>https://academictorrents.com/details/a7fa5a8f1c752d1fddacbfe8d1d566f0febf6818</guid>
<link>https://academictorrents.com/details/a7fa5a8f1c752d1fddacbfe8d1d566f0febf6818</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #264. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>263fc36ee6ac1905d8f737e7c2c2a9639b4d100c</infohash>
<guid>https://academictorrents.com/details/263fc36ee6ac1905d8f737e7c2c2a9639b4d100c</guid>
<link>https://academictorrents.com/details/263fc36ee6ac1905d8f737e7c2c2a9639b4d100c</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #263. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>56708cd8f232ba69da7a830b6c7ba211fa553a39</infohash>
<guid>https://academictorrents.com/details/56708cd8f232ba69da7a830b6c7ba211fa553a39</guid>
<link>https://academictorrents.com/details/56708cd8f232ba69da7a830b6c7ba211fa553a39</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #262. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>47c9c41ec5ad40d0f12f6543fcb860fdd8969278</infohash>
<guid>https://academictorrents.com/details/47c9c41ec5ad40d0f12f6543fcb860fdd8969278</guid>
<link>https://academictorrents.com/details/47c9c41ec5ad40d0f12f6543fcb860fdd8969278</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #261. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>257960f1fec0c2546dd5d53d72d0d0df9d29cc71</infohash>
<guid>https://academictorrents.com/details/257960f1fec0c2546dd5d53d72d0d0df9d29cc71</guid>
<link>https://academictorrents.com/details/257960f1fec0c2546dd5d53d72d0d0df9d29cc71</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #260. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>21f3b1ca982e752909a851ff02e414aafbede5b4</infohash>
<guid>https://academictorrents.com/details/21f3b1ca982e752909a851ff02e414aafbede5b4</guid>
<link>https://academictorrents.com/details/21f3b1ca982e752909a851ff02e414aafbede5b4</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #259. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>9e4daf0d5b7797eb13ef3707ce145d1a587db247</infohash>
<guid>https://academictorrents.com/details/9e4daf0d5b7797eb13ef3707ce145d1a587db247</guid>
<link>https://academictorrents.com/details/9e4daf0d5b7797eb13ef3707ce145d1a587db247</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #258. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>12b616e8ee29b7d170f944123eecc7a20520a996</infohash>
<guid>https://academictorrents.com/details/12b616e8ee29b7d170f944123eecc7a20520a996</guid>
<link>https://academictorrents.com/details/12b616e8ee29b7d170f944123eecc7a20520a996</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #257. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>87b83c99801f5ccad628bd5affb876f45ef3e533</infohash>
<guid>https://academictorrents.com/details/87b83c99801f5ccad628bd5affb876f45ef3e533</guid>
<link>https://academictorrents.com/details/87b83c99801f5ccad628bd5affb876f45ef3e533</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #256. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>820106cfc060cca35d5727f8465ad10d80335e1c</infohash>
<guid>https://academictorrents.com/details/820106cfc060cca35d5727f8465ad10d80335e1c</guid>
<link>https://academictorrents.com/details/820106cfc060cca35d5727f8465ad10d80335e1c</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #255. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>efbaf78b5ee9d687de0a0f20efafa63f9ab63ee9</infohash>
<guid>https://academictorrents.com/details/efbaf78b5ee9d687de0a0f20efafa63f9ab63ee9</guid>
<link>https://academictorrents.com/details/efbaf78b5ee9d687de0a0f20efafa63f9ab63ee9</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #254. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>983d3c78d7dc7ad1b4c7f281292180457083d43a</infohash>
<guid>https://academictorrents.com/details/983d3c78d7dc7ad1b4c7f281292180457083d43a</guid>
<link>https://academictorrents.com/details/983d3c78d7dc7ad1b4c7f281292180457083d43a</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #253. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>3f03be9e3c45dde56b17b9f7c06cf2852959f8ad</infohash>
<guid>https://academictorrents.com/details/3f03be9e3c45dde56b17b9f7c06cf2852959f8ad</guid>
<link>https://academictorrents.com/details/3f03be9e3c45dde56b17b9f7c06cf2852959f8ad</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #252. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>a9196d64726c8dca10b3ab35381fbb09d128f488</infohash>
<guid>https://academictorrents.com/details/a9196d64726c8dca10b3ab35381fbb09d128f488</guid>
<link>https://academictorrents.com/details/a9196d64726c8dca10b3ab35381fbb09d128f488</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #251. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>934a42a881bc55e83864abe1c730df9330785af3</infohash>
<guid>https://academictorrents.com/details/934a42a881bc55e83864abe1c730df9330785af3</guid>
<link>https://academictorrents.com/details/934a42a881bc55e83864abe1c730df9330785af3</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #250. Type: Velocity U Field</title>
<category>Dataset</category>
<infohash>0841d02e71fd6712c6b29ede3e750ead619c55fa</infohash>
<guid>https://academictorrents.com/details/0841d02e71fd6712c6b29ede3e750ead619c55fa</guid>
<link>https://academictorrents.com/details/0841d02e71fd6712c6b29ede3e750ead619c55fa</link>
<description/>
<size>89909797976</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #271. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>e41c619ad3f4c8f32ff80a1435c8c3d07a2bfa92</infohash>
<guid>https://academictorrents.com/details/e41c619ad3f4c8f32ff80a1435c8c3d07a2bfa92</guid>
<link>https://academictorrents.com/details/e41c619ad3f4c8f32ff80a1435c8c3d07a2bfa92</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #264. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>c2aba36e8b51caafa2b94edd461ddd011c13910f</infohash>
<guid>https://academictorrents.com/details/c2aba36e8b51caafa2b94edd461ddd011c13910f</guid>
<link>https://academictorrents.com/details/c2aba36e8b51caafa2b94edd461ddd011c13910f</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #263. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>f52bf1135b754c889c4e58720dff357d7bbf19a0</infohash>
<guid>https://academictorrents.com/details/f52bf1135b754c889c4e58720dff357d7bbf19a0</guid>
<link>https://academictorrents.com/details/f52bf1135b754c889c4e58720dff357d7bbf19a0</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #262. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>5a7b0d82dce1ada8430456ed52cdcbf77bb95c19</infohash>
<guid>https://academictorrents.com/details/5a7b0d82dce1ada8430456ed52cdcbf77bb95c19</guid>
<link>https://academictorrents.com/details/5a7b0d82dce1ada8430456ed52cdcbf77bb95c19</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #261. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>8d880d2f11cca301861847a046d9a3e7079a9112</infohash>
<guid>https://academictorrents.com/details/8d880d2f11cca301861847a046d9a3e7079a9112</guid>
<link>https://academictorrents.com/details/8d880d2f11cca301861847a046d9a3e7079a9112</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #260. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>c52bf8dab94074d8730477d690d8ea5725255868</infohash>
<guid>https://academictorrents.com/details/c52bf8dab94074d8730477d690d8ea5725255868</guid>
<link>https://academictorrents.com/details/c52bf8dab94074d8730477d690d8ea5725255868</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #259. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>ab28f0c640cf6b1b0943254ab0cd77202d2e4cbb</infohash>
<guid>https://academictorrents.com/details/ab28f0c640cf6b1b0943254ab0cd77202d2e4cbb</guid>
<link>https://academictorrents.com/details/ab28f0c640cf6b1b0943254ab0cd77202d2e4cbb</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #258. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>431333f5cf7e04447acbb50427e4c0f8e6d37fc0</infohash>
<guid>https://academictorrents.com/details/431333f5cf7e04447acbb50427e4c0f8e6d37fc0</guid>
<link>https://academictorrents.com/details/431333f5cf7e04447acbb50427e4c0f8e6d37fc0</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #257. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>d15933d4e7f49f6f3827609ceb795ed97784f54e</infohash>
<guid>https://academictorrents.com/details/d15933d4e7f49f6f3827609ceb795ed97784f54e</guid>
<link>https://academictorrents.com/details/d15933d4e7f49f6f3827609ceb795ed97784f54e</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #256. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>83608ad6c21a9c99a1fa2ac656fdc98dcb053439</infohash>
<guid>https://academictorrents.com/details/83608ad6c21a9c99a1fa2ac656fdc98dcb053439</guid>
<link>https://academictorrents.com/details/83608ad6c21a9c99a1fa2ac656fdc98dcb053439</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #255. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>91f92ac30472c2db9610402a72d42fc4b5641467</infohash>
<guid>https://academictorrents.com/details/91f92ac30472c2db9610402a72d42fc4b5641467</guid>
<link>https://academictorrents.com/details/91f92ac30472c2db9610402a72d42fc4b5641467</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #254. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>e9666ad463bd824722bf28e6effd92e331c4633c</infohash>
<guid>https://academictorrents.com/details/e9666ad463bd824722bf28e6effd92e331c4633c</guid>
<link>https://academictorrents.com/details/e9666ad463bd824722bf28e6effd92e331c4633c</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #253. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>65027504d448262ab855bbe9026eac956e1135a6</infohash>
<guid>https://academictorrents.com/details/65027504d448262ab855bbe9026eac956e1135a6</guid>
<link>https://academictorrents.com/details/65027504d448262ab855bbe9026eac956e1135a6</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #252. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>d1c5e73eb3b3f3af156793aaefa9482fdbe38823</infohash>
<guid>https://academictorrents.com/details/d1c5e73eb3b3f3af156793aaefa9482fdbe38823</guid>
<link>https://academictorrents.com/details/d1c5e73eb3b3f3af156793aaefa9482fdbe38823</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #251. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>c4e867d27ac89ef38856c3f65a803b8c947ce5f4</infohash>
<guid>https://academictorrents.com/details/c4e867d27ac89ef38856c3f65a803b8c947ce5f4</guid>
<link>https://academictorrents.com/details/c4e867d27ac89ef38856c3f65a803b8c947ce5f4</link>
<description/>
<size>89742055856</size>
</item><item>
<title>ZPG-TBL Re=2800-6650: Snapshot #250. Type: Pressure Field</title>
<category>Dataset</category>
<infohash>9dcd6eff4e2b1e7ec99f958d0d67ff6a8256b73a</infohash>
<guid>https://academictorrents.com/details/9dcd6eff4e2b1e7ec99f958d0d67ff6a8256b73a</guid>
<link>https://academictorrents.com/details/9dcd6eff4e2b1e7ec99f958d0d67ff6a8256b73a</link>
<description/>
<size>89742055856</size>
</item><item>
<title>Characterizations of anthropogenic disturbance patterns in the boreal mixedwood forests of Alberta, Canada</title>
<category>Dataset</category>
<infohash>85d0cb575d2a4720158d06f68d424fbcd5a0e663</infohash>
<guid>https://academictorrents.com/details/85d0cb575d2a4720158d06f68d424fbcd5a0e663</guid>
<link>https://academictorrents.com/details/85d0cb575d2a4720158d06f68d424fbcd5a0e663</link>
<description>These spatially-explicit data were generated from analysis of 15,431 disturbed features (e.g., roads, pipelines, seismic lines, harvest blocks, well sites) around the Oil Sands using the NEPTUNE (Novel Emulation Pattern Tool for Understanding Natural Events) web tool (www.foothillsri.ca/resource/neptune) in a Geographic Information System (GIS). The data are in ESRI ArcMap shapefile (.shp) format and are compressed in zip file (.zip) format. Metadata are included in the metadata.txt file. These data may be cited as: Pickell, P. D., D. W. Andison, and N. C. Coops. 2013. Characterizations of anthropogenic disturbance patterns in the boreal mixedwood forests of Alberta, Canada. Forest Ecology and Management. doi: 10.1016/j.foreco.2013.04.031</description>
<size>10821997</size>
</item><item>
<title>To introduce computer science in one day: The Throw platform</title>
<category>Paper</category>
<infohash>dce77c0199e3a98f81caa7bd204f8efade63732a</infohash>
<guid>https://academictorrents.com/details/dce77c0199e3a98f81caa7bd204f8efade63732a</guid>
<link>https://academictorrents.com/details/dce77c0199e3a98f81caa7bd204f8efade63732a</link>
<description>This paper presents Throw [1], an open source platform intended for use in a one-day event to increase interest in computer science and to equip students with tools for self-exploration. It is based on three years of hosting a one-day event aimed at introducing computer science to 5th-8th grade students. One major challenge in computer science education is fostering a commitment to learning that is powerful and engaging enough for students to use their learned skills to turn their ideas into reality, and that drives them to pursue computing. We present a theory of what is necessary for an effective introduction to computer science, as well as survey results from the first trial of the Throw platform, which embodies this theory.</description>
<size>690075</size>
</item><item>
<title>Management of acute and post-operative pain in chronic kidney disease</title>
<category>Paper</category>
<infohash>f92f4798efd078afe1708efb74a3816a66a23104</infohash>
<guid>https://academictorrents.com/details/f92f4798efd078afe1708efb74a3816a66a23104</guid>
<link>https://academictorrents.com/details/f92f4798efd078afe1708efb74a3816a66a23104</link>
<description>Chronic kidney disease is common, and patients with many co-morbid conditions frequently have to undergo surgical procedures and therefore require effective pain management. The pharmacokinetics of various analgesic agents are not well studied in patients with chronic kidney disease, and the risk of accumulation of the main drug or its metabolites, resulting in serious adverse events, is a common scenario on medical and surgical wards. It is common for these patients to be cared for by non-nephrologists, who often prescribe the standard dose of commonly used analgesics without taking the patient's kidney function into consideration. It is important to recognize the problems and complications associated with the use of standard doses of analgesics, and to highlight the importance of adjusting analgesic dosage based on kidney function to avoid complications while still providing adequate pain relief.</description>
<size>571347</size>
</item><item>
<title>ThermalMapper project by the Jacobs University Bremen - Outdoor Scans</title>
<category>Dataset</category>
<infohash>c83312afcc5d4937be92191f04eae917dfa77f1d</infohash>
<guid>https://academictorrents.com/details/c83312afcc5d4937be92191f04eae917dfa77f1d</guid>
<link>https://academictorrents.com/details/c83312afcc5d4937be92191f04eae917dfa77f1d</link>
<description>This data set was recorded as part of the ThermalMapper project at Jacobs University Bremen. It contains scans and thermal images recorded at several poses. At each pose, 9 thermal images and 9 scans of 40 degrees each are taken, thus covering the full 360 degrees. Description of files: scanXXX.3d contains the data collected at scan position scanXXX.pose. The points from each scan part are attributed with the thermal information from the thermal images. The pose information is obtained from odometry and refined using slam6d (downloadable from http://threedtk.de). Each line corresponds to one point, containing the x-, y-, and z-coordinates and the temperature in degrees centigrade. To view the data: 1. Unpack the files to thermodat. 2. View the result: ./bin/show -s 0 -e XX -f uosr thermodat</description>
<size>500869837</size>
</item><item>
<title>ThermalMapper project by the Jacobs University Bremen - Indoor Scans</title>
<category>Dataset</category>
<infohash>c1f271934d73384a037578b27a079981363cc9c8</infohash>
<guid>https://academictorrents.com/details/c1f271934d73384a037578b27a079981363cc9c8</guid>
<link>https://academictorrents.com/details/c1f271934d73384a037578b27a079981363cc9c8</link>
<description>Recorded with a Riegl VZ-400 laser scanner and an Optris PI IR camera. This data set was recorded as part of the ThermalMapper project at Jacobs University Bremen. It contains scans and thermal images recorded at several poses. At each pose, 9 thermal images and 9 scans of 40 degrees each are taken, thus covering the full 360 degrees. Description of files: scanXXX.3d contains the data collected at scan position scanXXX.pose. The points from each scan part are attributed with the thermal information from the thermal images. The pose information is obtained from odometry and refined using slam6d (downloadable from http://threedtk.de). Each line corresponds to one point, containing the x-, y-, and z-coordinates and the temperature in degrees centigrade. To view the data: 1. Unpack the files to thermodat. 2. View the result: ./bin/show -s 0 -e XX -f uosr thermodat</description>
<size>1157677056</size>
</item><item>
<title>Introducing R</title>
<category>Paper</category>
<infohash>d430724be7ac00f4b5e7f0d956f8411ef9b67dbe</infohash>
<guid>https://academictorrents.com/details/d430724be7ac00f4b5e7f0d956f8411ef9b67dbe</guid>
<link>https://academictorrents.com/details/d430724be7ac00f4b5e7f0d956f8411ef9b67dbe</link>
<description>The purpose of these notes, an update of my 1992 handout Introducing S-Plus, is to provide a quick introduction to R, particularly as a tool for fitting linear and generalized linear models. Additional examples may be found in the R Logs section of my GLM course. ##1. Introduction R is a powerful environment for statistical computing which runs on several platforms. These notes are written specifically for users running the Windows version, but most of the material applies to the Mac and Linux versions as well. ##1.1 The R Language and Environment R was first written as a research project by Ross Ihaka and Robert Gentleman, and is now under active development by a group of statisticians called "the R core team", with a home page at www.r-project.org. R was designed to be "not unlike" the S language developed by John Chambers and others at Bell Labs. A commercial version of S with additional features was developed and marketed as S-Plus by Statistical Sciences, which later became Insightful and is now TIBCO Spotfire. R and S-Plus can best be viewed as two implementations of the S language. R is available free of charge and is distributed under the terms of the Free Software Foundation's GNU General Public License. You can download the program from the Comprehensive R Archive Network (CRAN). Ready-to-run "binaries" are available for Windows, Mac OS X, and Linux. The source code is also available for download and can be compiled for other platforms. These notes are organized in several sections, as shown in the table of contents on the right. I have tried to introduce key features of R as they are needed by students in my statistics classes. As a result, I often postpone (or altogether omit) discussion of some of the more powerful features of R as a programming language. Notes of local interest, such as where to find R at Princeton University, appear in framed boxes and are labeled as such. 
Permission is hereby given to reproduce these pages freely and host them on your own server if you wish. You may add, edit, or delete material in the local notes as long as the rest of the text is left unchanged and due credit is given. Obviously I welcome corrections and suggestions for enhancement. ##1.2 Bibliographic Remarks S was first introduced by Becker and Chambers (1984) in what's known as the "brown" book. The new S language was described by Becker, Chambers and Wilks (1988) in the "blue" book. Chambers and Hastie (1992) edited a book discussing statistical modeling in S, called the "white" book. The latest version of the S language is described by Chambers (1998) in the "green" book, but R is largely an implementation of the versions documented in the blue and white books. Chambers's (2008) latest book focuses on Programming with R. Venables and Ripley (1994, 1997, 1999, 2002) have written an excellent book on Modern Applied Statistics with S-PLUS that is now in its fourth edition. The latest edition is particularly useful to R users because the main text explains differences between S-Plus and R where relevant. A companion volume called S Programming appeared in 2000 and applies to both S-Plus and R. These authors have also made available on their website an extensive collection of complements to their books; follow the links at MASS 4. There is now an extensive and rapidly growing literature on R. Good introductions include the books by Krause and Olson (1997), Dalgaard (2002), and Braun and Murdoch (2007). Beginners will probably benefit from working through the examples in Everitt and Hothorn's (2006) A Handbook of Statistical Analyses Using R or Fox's (2002) companion to applied regression. 
Among more specialized books my favorites include Murrell (2005), an essential reference on R graphics; Pinheiro and Bates (2000), a book on mixed models; and Therneau and Grambsch's (2000) Modeling Survival Data, which includes many analyses using S-Plus as well as SAS. (Therneau wrote the survival analysis code used in S-Plus and R.) For additional references see the annotated list at R Books. The official R manuals are available as PDF files that come with the R distribution. These include An Introduction to R (a nice 100-page introduction), a manual on R Data Import/Export describing facilities for transferring data to and from other packages, and useful notes on R Installation and Administration. More specialized documents include a draft of the R Language Definition, a guide to Writing R Extensions, documentation on R Internals including coding standards, and finally the massive R Reference Index (~3000 pages). The online help facility is excellent. When you install R you get a choice of various help formats. I recommend compiled HTML help because you get a nice tree view of the contents, an index, a pretty decent search engine, and nicely formatted help pages. (On Unix you should probably choose HTML help.)</description>
<size>820994</size>
</item><item>
<title>Think Bayes - Bayesian Statistics Made Simple</title>
<category>Paper</category>
<infohash>7d6d7b7fdecaeb0652cfb9449d25976bf56adac4</infohash>
<guid>https://academictorrents.com/details/7d6d7b7fdecaeb0652cfb9449d25976bf56adac4</guid>
<link>https://academictorrents.com/details/7d6d7b7fdecaeb0652cfb9449d25976bf56adac4</link>
<description>Think Bayes is an introduction to Bayesian statistics using computational methods. The premise of this book, and the other books in the Think X series, is that if you know how to program, you can use that skill to learn other topics. Most books on Bayesian statistics use mathematical notation and present ideas in terms of mathematical concepts like calculus. This book uses Python code instead of math, and discrete approximations instead of continuous mathematics. As a result, what would be an integral in a math book becomes a summation, and most operations on probability distributions are simple loops. I think this presentation is easier to understand, at least for people with programming skills. It is also more general, because when we make modeling decisions, we can choose the most appropriate model without worrying too much about whether the model lends itself to conventional analysis. Also, it provides a smooth development path from simple examples to real-world problems. Think Bayes is a Free Book. It is available under the Creative Commons Attribution-NonCommercial 3.0 Unported License, which means that you are free to copy, distribute, and modify it, as long as you attribute the work and don't use it for commercial purposes.</description>
<size>2374789</size>
</item><item>
<title>Accelerometer-Based Event Detector for Low-Power Applications</title>
<category>Paper</category>
<infohash>4bee3aad417a4670079da4daf129d5e4708f61d4</infohash>
<guid>https://academictorrents.com/details/4bee3aad417a4670079da4daf129d5e4708f61d4</guid>
<link>https://academictorrents.com/details/4bee3aad417a4670079da4daf129d5e4708f61d4</link>
<description>In this paper, an adaptive, autocovariance-based event detection algorithm is proposed, which can be used with micro-electro-mechanical systems (MEMS) accelerometer sensors to build inexpensive and power-efficient event detectors. The algorithm works well with low signal-to-noise ratio input signals, and its computational complexity is very low, allowing its use on inexpensive, low-end embedded sensor devices. The proposed algorithm decreases its energy consumption by lowering its duty cycle as much as the event to be detected allows. The performance of the algorithm is tested and compared to the conventional filter-based approach. The comparison was performed in an application in which illegal entry of vehicles into restricted areas was detected.</description>
<size>1452817</size>
</item><item>
<title>Efficient Accelerometer-based Event Detector in Wireless Sensor Networks</title>
<category>Paper</category>
<infohash>b0fc43009de3d358bfbd8a14ba99ca320b356bc5</infohash>
<guid>https://academictorrents.com/details/b0fc43009de3d358bfbd8a14ba99ca320b356bc5</guid>
<link>https://academictorrents.com/details/b0fc43009de3d358bfbd8a14ba99ca320b356bc5</link>
<description>In this paper, an autocovariance-based event detection algorithm is proposed. The algorithm is able to detect events even if the measurements have a poor signal-to-noise ratio, and its performance is independent of the characteristics of the input signal. An efficient implementation of the algorithm is also proposed, which allows the algorithm to be used on low-end devices, e.g., wireless sensor network nodes. The performance of the algorithm has been tested and compared to a conventional filter-based approach in a vehicle detector application.</description>
<size>516709</size>
</item><item>
<title>New approach for modeling of transiting exoplanets for arbitrary limb-darkening law</title>
<category>Paper</category>
<infohash>bbf1c32bd69459b93742eb691bf11fc8961e6db7</infohash>
<guid>https://academictorrents.com/details/bbf1c32bd69459b93742eb691bf11fc8961e6db7</guid>
<link>https://academictorrents.com/details/bbf1c32bd69459b93742eb691bf11fc8961e6db7</link>
<description>We present a new solution of the direct problem of planet transits based on transformation of double integrals to single ones. On the basis of our direct problem solution we created the code TAC-maker for rapid and interactive calculation of synthetic planet transits by numerical computation of the integrals. The validation of our approach was made by comparison with the results of the widespread Mandel &amp; Agol (2002) method for the cases of linear, quadratic, and square-root limb-darkening laws and various combinations of model parameters. For the first time, our approach allows the use of an arbitrary limb-darkening law for the host star. This advantage, together with the practically arbitrary precision of the calculations, makes the code a valuable tool for meeting the challenges of the continuously increasing photometric precision of ground-based and space observations.</description>
<size>3576941</size>
</item><item>
<title>Democratic Development by Larry Diamond</title>
<category>Course</category>
<infohash>1452d6a906d5d164c1adfc4b81a74a6690a8ec4b</infohash>
<guid>https://academictorrents.com/details/1452d6a906d5d164c1adfc4b81a74a6690a8ec4b</guid>
<link>https://academictorrents.com/details/1452d6a906d5d164c1adfc4b81a74a6690a8ec4b</link>
<description>About the Course: Democratic Development is intended as a broad, introductory survey of the political, social, cultural, economic, institutional, and international factors that foster or obstruct the development, and consolidation, of democracy. Topics will be examined in historical and comparative perspective, with reference to a variety of different national experiences. It is hoped that students in developing or prospective democracies will use the theories, ideas, and lessons in the class to help build or improve democracy in their own countries. This course is primarily intended for individuals in college or beyond, with some academic background or preparation in political science or the social sciences. However, it seeks to be accessible and useful to a diverse international audience, including educators at the secondary and college levels, government officials, development professionals, civil society leaders, journalists, bloggers, activists, and individuals involved in a wide range of activities and professions related to the development and deepening of democracy. Course Syllabus:
Week 1: Introduction to the Course; Why Democracy? What Is Democracy?; Regime Types; The Third Wave of Democratization and its Ebb
Week 2: Legitimacy, Authority and Effectiveness; Democratic Consolidation
Week 3: Political Culture and Democracy; Are Democratic Values Universal?
Week 4: Economic Development; Class Structure and Inequality; Civil Society
Week 5: Democratic Transition: Paths and Drivers; Democratic Transition: Types and Means
Week 6: Constitutional Design; Presidential vs. Parliamentary Government; Parties and Party Systems
Week 7: Electoral Systems; Choosing between Different Systems
Week 8: Ethnicity and Ethnic Conflict; Managing Ethnic Conflict; Federalism
Week 9: Horizontal Accountability and the Rule of Law; Controlling Corruption; Democratic Breakdowns
Week 10: International Factors; Promoting Democracy
Week 11: The Future of Democracy</description>
<size>3691712674</size>
</item><item>
<title>freefield1010 - an open dataset for research on audio field recording archives</title>
<category>Dataset</category>
<infohash>d247b92fa7b606e0914367c0839365499dd20121</infohash>
<guid>https://academictorrents.com/details/d247b92fa7b606e0914367c0839365499dd20121</guid>
<link>https://academictorrents.com/details/d247b92fa7b606e0914367c0839365499dd20121</link>
<description>A free and open dataset of 7690 10-second audio clips sampled from the field-recording tag in the Freesound audio archive. The dataset is designed for use in research related to data mining in audio archives of field recordings / soundscapes. Audio is standardised, and audio and metadata are Creative Commons licensed. For more information see http://arxiv.org/abs/1309.5275</description>
<size>5955911680</size>
</item><item>
<title>NLCD2006 Percent Developed Imperviousness</title>
<category>Dataset</category>
<infohash>636d5368f1a58f4a35dcc34b31c930ecc586dce4</infohash>
<guid>https://academictorrents.com/details/636d5368f1a58f4a35dcc34b31c930ecc586dce4</guid>
<link>https://academictorrents.com/details/636d5368f1a58f4a35dcc34b31c930ecc586dce4</link>
<description>An updated circa 2006 percent developed imperviousness estimate layer for the conterminous United States for all pixels.</description>
<size>730242305</size>
</item><item>
<title>NLCD2006 Land Cover Change (NLCD2006_landcover_change_pixels_5-4-11_se5.zip)</title>
<category>Dataset</category>
<infohash>28a2fd1afbda8be43bec55b6c4c2c9cf1f5b9582</infohash>
<guid>https://academictorrents.com/details/28a2fd1afbda8be43bec55b6c4c2c9cf1f5b9582</guid>
<link>https://academictorrents.com/details/28a2fd1afbda8be43bec55b6c4c2c9cf1f5b9582</link>
<description>Land cover layer containing only those pixels identified as changed between NLCD2001 Land Cover Version 2.0 and NLCD2006 Land Cover products for the conterminous United States.</description>
<size>104431285</size>
</item><item>
<title>NLCD2001 Land Cover (Version 2.0) NLCD2001_landcover_v2_2-13-11.zip</title>
<category>Dataset</category>
<infohash>d7ddd7894fa83034a336ff2fe1da51a3061d5a0f</infohash>
<guid>https://academictorrents.com/details/d7ddd7894fa83034a336ff2fe1da51a3061d5a0f</guid>
<link>https://academictorrents.com/details/d7ddd7894fa83034a336ff2fe1da51a3061d5a0f</link>
<description>Version 2.0 of the 2001 land cover layer for the conterminous United States for all pixels. Updated version of NLCD2001 for direct comparison with NLCD2006.</description>
<size>1070024120</size>
</item><item>
<title>NOAA Weather Data 2011</title>
<category>Dataset</category>
<infohash>e3e68948b2e01b01a415740cb6fa6fe918c971ac</infohash>
<guid>https://academictorrents.com/details/e3e68948b2e01b01a415740cb6fa6fe918c971ac</guid>
<link>https://academictorrents.com/details/e3e68948b2e01b01a415740cb6fa6fe918c971ac</link>
<description>This contains ISH/ISD data in directories by year. Please note that ISH and ISD refer to the same data&amp;mdash;Integrated Surface Data, sometimes called Integrated Surface Hourly. The filenames correspond with the station numbers listed in the ish-history.txt file described below; e.g., 723150-03812-2006 corresponds with USAF number 723150 and WBAN number 03812.</description>
<size>4568417610</size>
</item><item>
<title>RF Capture WFM-98MHz-25Msps.dat</title>
<category>Dataset</category>
<infohash>66b22eda63bbd202a2ec54979f197b11a41abef1</infohash>
<guid>https://academictorrents.com/details/66b22eda63bbd202a2ec54979f197b11a41abef1</guid>
<link>https://academictorrents.com/details/66b22eda63bbd202a2ec54979f197b11a41abef1</link>
<description/>
<size>1028171224</size>
</item><item>
<title>RF Capture gsm.dat.bz2</title>
<category>Dataset</category>
<infohash>dfd956b3e86279213b3b9a82a3156990dc735cac</infohash>
<guid>https://academictorrents.com/details/dfd956b3e86279213b3b9a82a3156990dc735cac</guid>
<link>https://academictorrents.com/details/dfd956b3e86279213b3b9a82a3156990dc735cac</link>
<description/>
<size>48541461</size>
</item><item>
<title>RF Capture noaa-12_256k.dat.bz</title>
<category>Dataset</category>
<infohash>191038d206e7b357f5550ea177cd6dd920bb5483</infohash>
<guid>https://academictorrents.com/details/191038d206e7b357f5550ea177cd6dd920bb5483</guid>
<link>https://academictorrents.com/details/191038d206e7b357f5550ea177cd6dd920bb5483</link>
<description/>
<size>438399179</size>
</item><item>
<title>RF Capture fm_capture-burlington.meta.32fc</title>
<category>Dataset</category>
<infohash>0ee508eae4bd5cb57238b4125852c3fe347c8d26</infohash>
<guid>https://academictorrents.com/details/0ee508eae4bd5cb57238b4125852c3fe347c8d26</guid>
<link>https://academictorrents.com/details/0ee508eae4bd5cb57238b4125852c3fe347c8d26</link>
<description/>
<size>800017271</size>
</item><item>
<title>RF Capture atsc_ref_data_low_snr_6.4Msps.16sc</title>
<category>Dataset</category>
<infohash>7c122f9c78ef503f28b9fd312798804a0be47aff</infohash>
<guid>https://academictorrents.com/details/7c122f9c78ef503f28b9fd312798804a0be47aff</guid>
<link>https://academictorrents.com/details/7c122f9c78ef503f28b9fd312798804a0be47aff</link>
<description/>
<size>450805760</size>
</item><item>
<title>RF Capture philly_93.3MHz_500ksps.32fc</title>
<category>Dataset</category>
<infohash>9a36a828aaf44f853b4f2f4c8291498afb113c96</infohash>
<guid>https://academictorrents.com/details/9a36a828aaf44f853b4f2f4c8291498afb113c96</guid>
<link>https://academictorrents.com/details/9a36a828aaf44f853b4f2f4c8291498afb113c96</link>
<description>This file was captured using a USRP N210 with a WBX board in Philadelphia on FM channel 93.3 MHz, a local rock station. The command to capture this using the latest GNU Radio with UHD is:</description>
<size>40000000</size>
</item><item>
<title>The MacTeX 2013 Distribution</title>
<category>Dataset</category>
<infohash>1156df74d774c3447912b00866aaf6520115594d</infohash>
<guid>https://academictorrents.com/details/1156df74d774c3447912b00866aaf6520115594d</guid>
<link>https://academictorrents.com/details/1156df74d774c3447912b00866aaf6520115594d</link>
<description>TeX (= tau epsilon chi, and pronounced similarly to "blecch", not like the state known for Tex-Mex chili) is a computer language designed for use in typesetting; in particular, for typesetting math and other technical (from Greek "techne" = art/craft, the stem of "technology") material.</description>
<size>2364633987</size>
</item><item>
<title>The PDS Universal Planetary Coordinates (UPC) Database, Mars DB</title>
<category>Dataset</category>
<infohash>f388284af06520160a7ad460915216ba6a46d401</infohash>
<guid>https://academictorrents.com/details/f388284af06520160a7ad460915216ba6a46d401</guid>
<link>https://academictorrents.com/details/f388284af06520160a7ad460915216ba6a46d401</link>
<description/>
<size>4223592264</size>
</item><item>
<title>Viking Merged Color Mosaic</title>
<category>Dataset</category>
<infohash>059ed25558b4587143db637ac3ca94bebb57d88d</infohash>
<guid>https://academictorrents.com/details/059ed25558b4587143db637ac3ca94bebb57d88d</guid>
<link>https://academictorrents.com/details/059ed25558b4587143db637ac3ca94bebb57d88d</link>
<description>![](http://i.imgur.com/4VLmMSg.png)

|Attribute|Value|
|---|---|
|Resolution|64 ppd|
|Scale|920 mpp|
|Projection|Simple cylindrical, -180E to 180E, 90N to -90N, ocentric|
|Layout|Single file|
|Total Size|23040 x 11520 pixels|
|Details|Viking color mosaic sharpened with MDIM 1.0. 64 ppd/920 m. NASA Viking Orbiter.|</description>
<size>276679753</size>
</item><item>
<title>MOLA Shaded Relief / Colorized Elevation</title>
<category>Dataset</category>
<infohash>06f73b5ca501194ba1cd3aa918bd801b84ea7050</infohash>
<guid>https://academictorrents.com/details/06f73b5ca501194ba1cd3aa918bd801b84ea7050</guid>
<link>https://academictorrents.com/details/06f73b5ca501194ba1cd3aa918bd801b84ea7050</link>
<description>|Attribute|Value|
|---|---|
|Resolution|128 ppd|
|Scale|463.1 mpp|
|Projection|Simple cylindrical, -180E to 180E, 90N to -90N, ocentric|
|Layout|30 x 30 degree tiles|
|Total Size|46080 x 23040 pixels|
|Details|Shaded relief derived from altimetry, colorized by elevation. 128 ppd/460 m. NASA MGS/MOLA.|
|Source|http://pds-geosciences.wustl.edu/missions/mgs/mola.html|
|Notes|Data populated from 88N to 88S|

![](http://i.imgur.com/ew6ssh8.png)</description>
<size>2767208648</size>
</item><item>
<title>THEMIS Day IR Global Mosaic</title>
<category>Dataset</category>
<infohash>8b202f57d4bf3304c10fcd11bdee224c3a9ff16f</infohash>
<guid>https://academictorrents.com/details/8b202f57d4bf3304c10fcd11bdee224c3a9ff16f</guid>
<link>https://academictorrents.com/details/8b202f57d4bf3304c10fcd11bdee224c3a9ff16f</link>
<description>|Attribute|Value|
|---|---|
|Version|2.0|
|Release Date|November 16, 2006|
|Resolution|256 ppd|
|Scale|231.55 mpp|
|Projection|Simple cylindrical, -180E to 180E, 90N to -90N, ocentric|
|Layout|30 x 30 degree tiles|
|Total Size|92160 x 46080 pixels|
|Details|Daytime thermal infrared (12.57um) mosaic. 256 ppd/230 m. NASA Mars Odyssey/THEMIS.|
|Notes|Gores are filled using PNG transparency.|</description>
<size>4391605981</size>
</item><item>
<title>TNC - Freshwater Ecoregions</title>
<category>Dataset</category>
<infohash>fb993412755d0bdc8aabd9c6959215293958b220</infohash>
<guid>https://academictorrents.com/details/fb993412755d0bdc8aabd9c6959215293958b220</guid>
<link>https://academictorrents.com/details/fb993412755d0bdc8aabd9c6959215293958b220</link>
<description>The Freshwater Ecoregions Of the World (FEOW) provide a global biogeographic regionalization of the Earth's freshwater biodiversity. This version of the FEOW, modified by The Nature Conservancy, includes additional tabular data describing Major Habitat Types (MHTs, similar to terrestrial biomes, but unpublished). You can read more about the FEOW, and obtain the unmodified shapefile, at www.feow.org.</description>
<size>14114181</size>
</item><item>
<title>TNC - Terrestrial Ecoregions</title>
<category>Dataset</category>
<infohash>fbfe954c816cf914709d9483134df7448337eb9e</infohash>
<guid>https://academictorrents.com/details/fbfe954c816cf914709d9483134df7448337eb9e</guid>
<link>https://academictorrents.com/details/fbfe954c816cf914709d9483134df7448337eb9e</link>
<description>This is the master spatial data layer for TNC's terrestrial ecoregions of the world, exported from the geodatabase listed above. Note that it includes the Mangroves, Inland Water, and Rock and Ice MHTs, although they are not handled by terrestrial assessments. This layer is based on WWF's ecoregions outside the United States, and loosely based on Bailey's ecoregions (from the USDA Forest Service) within the United States.

terr-ecoregions-TNC:
* tnc_terr_ecoregions.dbf
* tnc_terr_ecoregions.lyr
* tnc_terr_ecoregions.prj
* tnc_terr_ecoregions.sbn
* tnc_terr_ecoregions.sbx
* tnc_terr_ecoregions.shp
* tnc_terr_ecoregions.shp.xml
* tnc_terr_ecoregions.shx</description>
<size>52749054</size>
</item><item>
<title>TNC - Marine Ecoregions</title>
<category>Dataset</category>
<infohash>551952d08103200cf5034fb74adf71643aa0c643</infohash>
<guid>https://academictorrents.com/details/551952d08103200cf5034fb74adf71643aa0c643</guid>
<link>https://academictorrents.com/details/551952d08103200cf5034fb74adf71643aa0c643</link>
<description>The Marine Ecoregions Of the World (MEOW) data set is a biogeographic classification of the world's coasts and shelves. The ecoregions nest within the broader biogeographic tiers of Realms and Provinces. Further details about the MEOW system, PDFs of the BioScience paper, and the comprehensive listing of sources are available from www.worldwildlife.org/MEOW/ and www.nature.org/MEOW.</description>
<size>402826</size>
</item><item>
<title>Massachusetts USGS 30cm Color Ortho Imagery (2013) - JPEG2000 Format</title>
<category>Dataset</category>
<infohash>82c64b111b07ff855b8966701a13a25512687521</infohash>
<guid>https://academictorrents.com/details/82c64b111b07ff855b8966701a13a25512687521</guid>
<link>https://academictorrents.com/details/82c64b111b07ff855b8966701a13a25512687521</link>
<description>In spring 2013, the U.S. Geological Survey contracted for true-color imagery covering three urban areas in Massachusetts as defined by the USGS. Those areas are the metropolitan Boston area (and beyond), the greater Worcester area, and the greater Springfield area. Image type for all of the areas is 24-bit, 4-band (red, green, blue, and near-infrared; RGBN). Each band has pixel values ranging 0-255. Pixel resolution is 0.3 meters (30 centimeters), or approximately one foot. This digital orthoimagery can serve a variety of purposes, from general planning, to field reference for spatial analysis, to a tool for data development and revision of vector maps. It can also serve as a reference layer or basemap for myriad applications inside geographic information system (GIS) software. It was created to provide readily accessible geospatial data that enhances the capability of Federal, State, and local emergency responders and supports planning for homeland security efforts. These data also support The National Map.

Aerial Acquisition

The raw ADS80 image data were collected by Fugro EarthData, Inc. at about 2,896 meters above mean terrain during mid to late April 2013. The source imagery is cloud free, and was acquired in generally leaf-off conditions. Images are available for download in the JPEG2000 format, at a 20:1 compression ratio, 4 bands (RGBN), as 1,500 meters × 1,500 meters tiles.</description>
<size>25740020249</size>
</item><item>
<title>Massachusetts USGS 15cm Color Ortho Imagery (2008/2009) - MrSID Format</title>
<category>Dataset</category>
<infohash>166bf2b135167e5af37a35c4f09c25b453936496</infohash>
<guid>https://academictorrents.com/details/166bf2b135167e5af37a35c4f09c25b453936496</guid>
<link>https://academictorrents.com/details/166bf2b135167e5af37a35c4f09c25b453936496</link>
<description>In spring 2008, the U.S. Geological Survey, as part of its Boston 133 Cities Urban Area mapping program, contracted for true-color imagery covering the metropolitan Boston area and beyond. Image type for the entire region (more than 1.7 million acres) is 24-bit, 3-band (red, green, blue) natural color. Each band has pixel values ranging 0-255. Pixel resolution is 30 cm, or approximately one foot. Additionally, 30 municipalities participated in the Boston Upgrade of the USGS project; these cities and towns contributed funding for separate flights to produce 4-band (red, green, blue, near-infrared) imagery. Pixel resolution for these images is 15 centimeters (approximately 6 inches). In spring 2009, USGS continued the project and 4-band 30cm imagery was obtained for the remainder of the state. Additionally, 14 municipalities provided funding for 4-band 15cm imagery to cover their communities. This digital orthoimagery can serve a variety of purposes, from general planning, to field reference for spatial analysis, to a tool for data development and revision of vector maps. It can also serve as a reference layer or basemap for myriad applications inside geographic information system (GIS) software. Images are available for download in the MrSID Generation 2 format, at a 15:1 lossy compression ratio, 3 bands (RGB), as 1,500 meters × 1,500 meters tiles (based on the 2008/2009 USGS Color Ortho Index coq2008-09_index.pdf tiling scheme; refer to the 8-digit numbers in each tile).</description>
<size>20478626686</size>
</item><item>
<title>Arizona State University Flixster Data Set</title>
<category>Dataset</category>
<infohash>4960373ea6dec89153639b0975ea92f9e3d3c914</infohash>
<guid>https://academictorrents.com/details/4960373ea6dec89153639b0975ea92f9e3d3c914</guid>
<link>https://academictorrents.com/details/4960373ea6dec89153639b0975ea92f9e3d3c914</link>
<description>Flixster is a social movie site allowing users to share movie ratings, discover new movies, and meet others with similar movie taste. This dataset contains the friendship network, crawled in December 2010 by Javier Parra (Javier.Parra@asu.edu) and organized in CSV format for easier understanding.

|Attribute|Value|
|-|-|
|Number of Nodes:|2,523,386|
|Number of Edges:|9,197,338|
|Missing Values?|no|
|Source:|N/A|

##Data Set Information:
Two files are included:
1. nodes.csv: the list of all users. It serves as a dictionary of every user in this data set, useful for fast reference, and contains all the node ids used in the dataset.
2. edges.csv: the friendship network among the users. Friendships are represented as edges; a line "1,2" means the user with id "1" is a friend of the user with id "2".</description>
<size>36140875</size>
</item><item>
<title>Arizona State University Twitter Data Set </title>
<category>Dataset</category>
<infohash>2399616d26eeb4ae9ac3d05c7fdd98958299efa9</infohash>
<guid>https://academictorrents.com/details/2399616d26eeb4ae9ac3d05c7fdd98958299efa9</guid>
<link>https://academictorrents.com/details/2399616d26eeb4ae9ac3d05c7fdd98958299efa9</link>
<description>Twitter is a social news website. It can be viewed as a hybrid of email, instant messaging, and SMS messaging rolled into one neat and simple package, and an easy way to discover the latest news about subjects you care about. This dataset contains the follower network among users, organized in CSV format.

|Attribute|Value|
|-|-|
|Number of Nodes:|11,316,811|
|Number of Edges:|85,331,846|
|Missing Values?|no|
|Source:|N/A|

##Data Set Information:
1. nodes.csv: the list of all users. It serves as a dictionary of every user in this data set, useful for fast reference, and contains all the node ids used in the dataset.
2. edges.csv: the friendship/followership network among the users. Friends/followers are represented as edges, and edges are directed; a line "1,2" means the user with id "1" is following the user with id "2".</description>
<size>354770146</size>
</item><item>
<title>UK Road Safety Data 1979 to 2004</title>
<category>Dataset</category>
<infohash>c7d2d7a91ae3fd0256dd2ba2d7344960cb3c4dbb</infohash>
<guid>https://academictorrents.com/details/c7d2d7a91ae3fd0256dd2ba2d7344960cb3c4dbb</guid>
<link>https://academictorrents.com/details/c7d2d7a91ae3fd0256dd2ba2d7344960cb3c4dbb</link>
<description>These files provide detailed road safety data about the circumstances of personal injury road accidents in Great Britain from 1979, the types (including make and model) of vehicles involved, and the consequential casualties. The statistics relate only to personal injury accidents on public roads that are reported to the police and subsequently recorded using the STATS19 accident reporting form.

Also included:
* Results of breath-test screening data from recently introduced digital breath-testing devices, as provided by police authorities in England and Wales.
* Results of blood alcohol levels (milligrams / 100 millilitres of blood) obtained by matching coroners' data (provided by coroners in England and Wales and by Procurators Fiscal in Scotland) with fatality data from the STATS19 police data of road accidents in Great Britain. Cases where the blood alcohol level for a fatality is "unknown" result from an unsuccessful match between the two data sets.

Licence: Open Government Licence</description>
<size>253669263</size>
</item><item>
<title>Massachusetts 1:5,000 Color Ortho Imagery (2005) - JPEG 2000 Format (Original)</title>
<category>Dataset</category>
<infohash>6b4075043dc071daa380dc1668a0ad79b2bb52b3</infohash>
<guid>https://academictorrents.com/details/6b4075043dc071daa380dc1668a0ad79b2bb52b3</guid>
<link>https://academictorrents.com/details/6b4075043dc071daa380dc1668a0ad79b2bb52b3</link>
<description>Overview

These medium resolution true color images are considered the new "basemap" for the Commonwealth by MassGIS. The photography for the entire commonwealth was captured in April 2005 when deciduous trees were mostly bare and the ground was generally free of snow. Image type is 4-band (RGBN): natural color (red, green, blue) and near-infrared, in 8 bits (values ranging 0-255) per band. Image horizontal accuracy is +/-3 meters at the 95% confidence level at the nominal scale of 1:5,000. This digital orthoimagery can serve a variety of purposes, from general planning, to field reference for spatial analysis, to a tool for development and revision of vector maps. It can also serve as a reference layer or basemap for myriad applications inside geographic information system (GIS) software. The project was funded by the Executive Office of Environmental Affairs, the Department of Environmental Protection, the Massachusetts Highway Department, and the Department of Public Health.

Production

Sanborn LLC of Colorado Springs, CO, performed all work for this project. The source imagery was acquired with a Vexcel Ultracam digital camera at a flying height of 5,070 meters above mean terrain and an approximate pixel resolution of 45 cm. Forward overlap was approximately 60%, except 80% in areas with tall structures (downtown Boston, Worcester, and Springfield), in order to reduce building lean, with sidelap of 33%. The entire state was covered by about 5500 image frames, captured over seven days from April 9 through April 17, 2005. The ground control used to support the mapping was collected by photographic identification of strategic points. The ground control coordinates were collected via GPS ground survey techniques. Aerial triangulation was performed on softcopy workstations using Intergraph ISAT software for photo measurement and matching. The final bundle adjustment was performed using BINGO 5.2 software.
A new digital elevation model was stereo compiled for the entire State from the newly acquired 2005 imagery. The DTM includes mass points, soft breaklines, and hard breaklines. The images were ortho-rectified using METRO, Sanborn's proprietary software. Bridges were modeled in 3-D using standard photogrammetric stereo-compilation techniques on softcopy workstations. Sanborn's Metro process rectifies the bridges using the 3-D model, applying methodologies similar to those used for correcting the positional accuracy of other ground features. The bridges were uniquely coded and later removed from the final deliverable DTM file. Imagery is georeferenced to the Massachusetts State Plane Mainland (Lambert Conformal Conic Projection) NAD83 coordinate system, denominated in meters. Color balancing was performed using METRO_NICE software. The resulting images were mosaicked into one seamless database of imagery and extracted to match the existing MassGIS Orthophoto Index Grid tile layout (each image tile covers 4,000 x 4,000 meters on the ground). Images were quality-controlled by Sanborn using Adobe Photoshop software. Final deliverables included 1/2-meter pixel resolution GeoTIFF images with supplementary .tfw files and metadata. MassGIS quality assurance included rigorous independent checks of the spatial accuracy using other datasets of significantly higher accuracy, and field work that included the capture of highly accurate GPS points that were compared to the same locations appearing on the deliverables. MassGIS also assessed the visual quality and appearance of the images.

Distribution

Due to the large size of the original half-meter GeoTIFF images, MassGIS is also making these images available in the compressed MrSID and JPEG 2000 (JP2) formats. Options include images tiled by the orthophoto index as well as large regional mosaics, which comprise from 26 to 73 ortho index tiles.
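The "large size" of the original half-meter GeoTIFFs is easy to estimate from the figures in this entry; a rough sketch, taking the 4-band, 8-bit format and the 4,000 x 4,000 meter tile layout described above as given:

```python
# Back-of-envelope size of one uncompressed orthophoto tile:
# each tile covers 4,000 x 4,000 meters at 1/2-meter pixel resolution,
# with 4 bands (RGBN) of 8 bits each.
tile_m = 4000                  # tile edge length on the ground, in meters
pixel_m = 0.5                  # ground sample distance, in meters per pixel
bands = 4                      # red, green, blue, near-infrared
pixels_per_side = int(tile_m / pixel_m)            # 8000 pixels per side
uncompressed_bytes = pixels_per_side ** 2 * bands  # one byte per band per pixel
uncompressed_mb = uncompressed_bytes / 1e6         # 256.0 MB per tile
```

At the 16:1 JPEG 2000 compression quoted below, that works out to roughly the 15 MB-per-tile figure given for the downloadable files.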
Users may access the JPEG 2000 data by free download from the MassGIS ftp server or by ordering the Mosaics and MrSID tiles data on CD or DVD. Details are provided below.

Original vs. "Contrast Stretched" Imagery

MassGIS has produced a set of "Contrast Stretched" MrSID and JP2 data for users who do not have the software tools to modify the appearance of the original imagery. This second set of compressed data was produced from a set of GeoTIFFs that MassGIS modified with a 2.75 standard deviation linear contrast stretch in Erdas Imagine software. A linear contrast stretch is a simple way to improve the visible contrast of an image by changing the individual values of the pixels in the image. Usually, a contrast stretch is performed only on the display device (screen, printer, etc.), so that the data file values do not change. In this case, the stretched pixel values were saved to the TIFFs, and the TIFFs were used to make the second set of MrSID and JP2 files. MassGIS is making this second set of images available for those whose software does not permit display adjustments, or who simply prefer not to adjust the contrast. These contrast stretched images may help solve some of the problems that some users encountered in getting the original images to look the way they wanted. These new images have a much greater contrast when compared to the originals. The drawback is that the stretch is "fixed", so that you cannot recover the original pixel values. With the original set of images (GeoTIFF, MrSID, and JP2 formats), the user can achieve the same type of contrast adjustment seen in the second set of imagery and still make use of the full range of data values acquired by the digital cameras. Here are screen shots that compare the same area in the original MrSIDs (with no stretch or modification) and the contrast stretched imagery.
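The standard-deviation linear contrast stretch described above is straightforward to reproduce; a minimal NumPy sketch (the function name and synthetic band are illustrative, not MassGIS's actual Erdas Imagine workflow):

```python
import numpy as np

def stddev_stretch(band, n_std=2.75):
    """Linear contrast stretch: map [mean - n_std*sigma, mean + n_std*sigma]
    onto the full 0-255 range, clipping values outside that window."""
    band = band.astype(np.float64)
    lo = band.mean() - n_std * band.std()
    hi = band.mean() + n_std * band.std()
    stretched = (band - lo) / (hi - lo) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# A low-contrast synthetic band, with values clustered in 100-140.
band = np.linspace(100, 140, 256).reshape(16, 16)
out = stddev_stretch(band)  # output spans a much wider range than the input
```

Because the stretched values are written back to the pixels, the original data values cannot be recovered from the second set of images, which is why both sets are distributed.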
To learn how to adjust the appearance of the original imagery to your liking, see the COQ 2005 Display Options page (available in PDF and DOC formats).

Free Download

Images are available for download as 4 km tiles (based on the Ortho Index tiling scheme) in JPEG 2000 format, lossy, at a 16:1 compression ratio, 4 bands (RGB and IR), about 15 MB each. Two sets are provided: one from the original GeoTIFFs and one from the contrast stretched GeoTIFFs.</description>
<size>24759011328</size>
</item><item>
<title>Massachusetts 1:10,000 Coastal Color Orthophoto Images (1994) - MrSID Format</title>
<category>Dataset</category>
<infohash>ddd95cb0ff0ecca12d4803e4890596f26e1218d0</infohash>
<guid>https://academictorrents.com/details/ddd95cb0ff0ecca12d4803e4890596f26e1218d0</guid>
<link>https://academictorrents.com/details/ddd95cb0ff0ecca12d4803e4890596f26e1218d0</link>
<description>Overview

These color coastal orthophotographs were generated through a cooperative effort between the Massachusetts Coastal Zone Management Office, the NOAA Photogrammetry Division, and the National Geodetic Survey. The data covers most of the coastal zone region. Digital orthophoto production was provided by Photo Science Inc. of Gaithersburg, Maryland. The data set is tiled identically to the MassGIS black and white orthophotos for both the mainland and island regions (398 images; see the Coastal Color Orthophotos Index datalayer description). Additionally, one more image was created for Nomans Land and is not based on the same index.

Methodology

The color aerial photography was captured in September and October of 1994 by the Photogrammetry Division of NOAA. The scale of the original photography is 1:48,000. Differential airborne GPS was used for control. Approximately 31 flight lines were conducted, with the orientation of the flight lines designed to cover the maximum area of shoreline. Approximately 360 photographs were captured. Approximately 16 ground panels were placed in the field and surveyed. Aerotriangulation was conducted by the Photogrammetry Division utilizing analytical stereo plotters. The control was processed using 3 block areas: A) north of Boston, B) Boston south, including the Elizabeth Islands, and C) Martha's Vineyard with Nantucket. Control was developed to provide an accuracy that exceeds NMAS for 1:10,000. In large portions of the area, control exceeds the NMAS for 1:7,000. Diapositives were scanned for a final output resolution of 1.0 meter. Scanning was done to match the diapositives as closely as possible. Bulk radiometric adjustments of the imagery were conducted using Adobe Photoshop "auto levels" to remove the green haze and to stretch the contrast. Mass point and breakline elevations were created and used in the production. Only mass point elevations are available for the area.
Elevation data was developed primarily for the purpose of orthorectification, and not for detailed contouring. Images for Martha's Vineyard and Nantucket were originally georeferenced to the Massachusetts State Plane Island Zone coordinate system, but have been projected in ArcInfo to the Mainland Zone for consistency with other MassGIS data layers. These mainland-zone images for the islands became available in June 2001; the images for all other areas were released in February 1998. The original one-meter images are 48 MB per tile. Two-meter versions of the images, resampled in ArcInfo, are 12 MB each. The tiles are in TIFF format and are accompanied by .tfw header files for georeferencing in GIS software. In addition, versions of the one-meter images in the MrSID format have been created at 30:1 compression with 8 zoom levels. These are available with .sdw header files as individual MrSID images as well as one single mosaic comprising the entire coastline, including Martha's Vineyard and Nantucket. The one-meter SIDs may be downloaded or ordered on DVD (from the Digital Data Products section of the order form). The MrSID mosaic may be purchased on DVD because of file size considerations.</description>
<size>769838156</size>
</item><item>
<title>THEMIS Night IR 100m Global Mosaic</title>
<category>Dataset</category>
<infohash>517cc93da6740d759ff02a845795e839bbebeb67</infohash>
<guid>https://academictorrents.com/details/517cc93da6740d759ff02a845795e839bbebeb67</guid>
<link>https://academictorrents.com/details/517cc93da6740d759ff02a845795e839bbebeb67</link>
<description>|Attribute|Value|
|---|---|
|Version|8.0|
|Release Date|June 23, 2010|
|Resolution|592.75 ppd|
|Scale|99.7 mpp|
|Projection|Simple cylindrical, 0E to 360E, 60N to 60S, ocentric|
|Layout|60 x 30 degree tiles|
|Total Size|213391 x 88914 pixels|
|Details|Nighttime thermal infrared (12.57um) mosaic. 593 ppd/100 m. NASA Mars Odyssey/THEMIS.|</description>
<size>37947302766</size>
</item><item>
<title>THEMIS Day IR 100m Global Mosaic</title>
<category>Dataset</category>
<infohash>8b89d5825ca251ea355277d1f6e014891aa24875</infohash>
<guid>https://academictorrents.com/details/8b89d5825ca251ea355277d1f6e014891aa24875</guid>
<link>https://academictorrents.com/details/8b89d5825ca251ea355277d1f6e014891aa24875</link>
<description>|Attribute|Value|
|---|---|
|Version|11|
|Release Date|June 23, 2010|
|Resolution|592.75 ppd|
|Scale|99.7 mpp|
|Projection|Simple cylindrical, 0E to 360E, 90N to -90N, ocentric|
|Total Size|213391 x 106699 pixels|
|Details|Daytime thermal infrared (12.57um) mosaic. 593 ppd/99 m. NASA Mars Odyssey/THEMIS.|</description>
<size>45537078919</size>
</item><item>
<title>UCI Machine Learning Datasets 12/2013</title>
<category>Dataset</category>
<infohash>7fafb101f9c7961f9b840daeb4af43039107ddef</infohash>
<guid>https://academictorrents.com/details/7fafb101f9c7961f9b840daeb4af43039107ddef</guid>
<link>https://academictorrents.com/details/7fafb101f9c7961f9b840daeb4af43039107ddef</link>
<description>The UCI Machine Learning Repository is a collection of databases, domain theories, and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms. The archive was created as an ftp archive in 1987 by David Aha and fellow graduate students at UC Irvine. Since that time, it has been widely used by students, educators, and researchers all over the world as a primary source of machine learning data sets. As an indication of the impact of the archive, it has been cited over 1000 times, making it one of the top 100 most cited "papers" in all of computer science. The current version of the web site was designed in 2007 by Arthur Asuncion and David Newman, and this project is in collaboration with Rexa.info at the University of Massachusetts Amherst. Funding support from the National Science Foundation is gratefully acknowledged. Many people deserve thanks for making the repository a success. Foremost among them are the donors and creators of the databases and data generators. Special thanks should also go to the past librarians of the repository: David Aha, Patrick Murphy, Christopher Merz, Eamonn Keogh, Cathy Blake, Seth Hettich, and David Newman.</description>
<size>16365432846</size>
</item><item>
<title>Visual Object Classes Challenge 2012 Dataset (VOC2012) VOCtrainval_11-May-2012.tar</title>
<category>Dataset</category>
<infohash>df0aad374e63b3214ef9e92e178580ce27570e59</infohash>
<guid>https://academictorrents.com/details/df0aad374e63b3214ef9e92e178580ce27570e59</guid>
<link>https://academictorrents.com/details/df0aad374e63b3214ef9e92e178580ce27570e59</link>
<description>##Introduction The main goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are: * Person: person * Animal: bird, cat, cow, dog, horse, sheep * Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train * Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor There are three main object recognition competitions: classification, detection, and segmentation, a competition on action classification, and a competition on large scale recognition run by ImageNet. In addition there is a "taster" competition on person layout. ##Classification/Detection Competitions Classification: For each of the twenty classes, predicting presence/absence of an example of that class in the test image. Detection: Predicting the bounding box and label of each object from the twenty target classes in the test image. 20 classes ![](http://i.imgur.com/WmLRN4p.png) * aeroplane * bicycle * bird * boat * bottle * bus * car * cat * chair * cow * dining table * dog * horse * motorbike * person * potted plant * sheep * sofa * train * tv/monitor Participants may enter either (or both) of these competitions, and can choose to tackle any (or all) of the twenty object classes. The challenge allows for two approaches to each of the competitions: 1. Participants may use systems built or trained using any methods or data excluding the provided test sets. 2. Systems are to be built or trained using only the provided training/validation data. The intention in the first case is to establish just what level of success can currently be achieved on these problems and by what method; in the second case the intention is to establish which method is most successful given a specified training set. 
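For the detection competition, a predicted bounding box is conventionally scored against ground truth by intersection-over-union; the 0.5 threshold below is the standard VOC matching criterion (not spelled out in this summary), and both boxes are made up for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # horizontal overlap
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # vertical overlap
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

# A prediction counts as a correct detection when its overlap with a
# same-class ground-truth box exceeds 0.5.
gt   = (10, 10, 110, 110)   # made-up ground-truth box, 100x100 pixels
pred = (20, 20, 120, 120)   # made-up prediction, shifted by 10 pixels
match = iou(gt, pred) > 0.5
```

Here the shifted prediction still overlaps the ground truth well (IoU about 0.68), so it would be counted as correct.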
##Segmentation Competition Segmentation: Generating pixel-wise segmentations giving the class of the object visible at each pixel, or "background" otherwise. ![](https://i.imgur.com/ek0NbVK.png) ##Action Classification Competition Action Classification: Predicting the action(s) being performed by a person in a still image. ![](https://i.imgur.com/w8tr9hs.png) * jumping * phoning * playinginstrument * reading * ridingbike * ridinghorse * running * takingphoto * usingcomputer * walking In 2012 there are two variations of this competition, depending on how the person whose actions are to be classified is identified in a test image: (i) by a tight bounding box around the person; (ii) by only a single point located somewhere on the body. The latter competition aims to investigate the performance of methods given only approximate localization of a person, as might be the output from a generic person detector. ##ImageNet Large Scale Visual Recognition Competition The goal of this competition is to estimate the content of photographs for the purpose of retrieval and automatic annotation using a subset of the large hand-labeled ImageNet dataset (10,000,000 labeled images depicting 10,000+ object categories) as training. Test images will be presented with no initial annotation - no segmentation or labels - and algorithms will have to produce labelings specifying what objects are present in the images. In this initial version of the challenge, the goal is only to identify the main objects present in images, not to specify the location of objects. Further details can be found at the ImageNet website. ##Person Layout Taster Competition Person Layout: Predicting the bounding box and label of each part of a person (head, hands, feet). ![](https://i.imgur.com/Hphaauf.png) ##Data To download the training/validation data, see the development kit. 
The training data provided consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the twenty classes present in the image. Note that multiple objects from multiple classes may be present in the same image. Annotation was performed according to a set of guidelines distributed to all annotators. A subset of images are also annotated with pixel-wise segmentation of each object present, to support the segmentation competition. Images for the action classification task are disjoint from those of the classification/detection/segmentation tasks. They have been partially annotated with people, bounding boxes, reference points and their actions. Annotation was performed according to a set of guidelines distributed to all annotators. Images for the person layout taster, where the test set is disjoint from the main tasks, have been additionally annotated with parts of the people (head/hands/feet). The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission. In the second stage, the test set will be made available for the actual competition. As in the VOC2008-2011 challenges, no ground truth for the test data will be released. The data has been split into 50% for training/validation and 50% for testing. The distributions of images and objects by class are approximately equal across the training/validation and test sets. Statistics of the database are online.</description>
<size>1999639040</size>
</item><item>
<title>Crater Detection via Genetic Search Methods to Reduce Image Features</title>
<category>Paper</category>
<infohash>8ae530c0c1466ba8feee9914236cc900ad2f708e</infohash>
<guid>https://academictorrents.com/details/8ae530c0c1466ba8feee9914236cc900ad2f708e</guid>
<link>https://academictorrents.com/details/8ae530c0c1466ba8feee9914236cc900ad2f708e</link>
<description>Recent approaches to crater detection have been inspired by face detection's use of gray-scale texture features. Using gray-scale texture features for supervised machine learning crater detection algorithms provides better classification of craters in planetary images than previous methods. When using Haar features it is typical to generate thousands of numerical values from each candidate crater image. This magnitude of image features to extract and consider can spell disaster when the application is an entire planetary surface. One solution is to reduce the number of features extracted and considered in order to increase accuracy as well as speed. Feature subset selection provides the operational classifiers with a concise and denoised set of features by reducing irrelevant and redundant features. Feature subset selection is known to be NP-hard. To provide an efficient suboptimal solution, four genetic algorithms are proposed to use greedy selection, weighted random selection, and simulated annealing to distinguish discriminative features from indiscriminative features. Inspired by analysis regarding the relationship between subset size and accuracy, a squeezing algorithm is presented to shrink the genetic algorithm's chromosome cardinality during the genetic iterations. A significant increase in the classification performance of a Bayesian classifier in crater detection using image texture features is observed.</description>
<size>19307248</size>
</item><item>
<title>Bernoulli trials based feature selection for crater detection</title>
<category>Paper</category>
<infohash>37499de2b944dacc88cd295d3f9631670bd6abe6</infohash>
<guid>https://academictorrents.com/details/37499de2b944dacc88cd295d3f9631670bd6abe6</guid>
<link>https://academictorrents.com/details/37499de2b944dacc88cd295d3f9631670bd6abe6</link>
<description>Counting craters is a fundamental task of planetary science because it provides the only tool for measuring relative ages of planetary surfaces. However, advances in surveying craters present in data gathered by planetary probes have not kept up with advances in data collection. One challenge of auto-detecting craters in images is to identify the image features that discriminate between craters and other surface objects. The problem of optimal feature selection is known to be NP-hard and the search is computationally intractable. In this paper we propose a wrapper-based randomized feature selection method to efficiently select relevant features for crater detection. We design and implement a dynamic programming algorithm to search for a relevant feature subset by removing irrelevant features and minimizing a cost objective function simultaneously. In order to remove only irrelevant features we use Bernoulli trials to calculate the probability of such a case using the cost function. Our proposed algorithms are empirically evaluated on a large high-resolution Martian image exhibiting heavily cratered Martian terrain characterized by heterogeneous surface morphology. The experimental results demonstrate that the proposed approach achieves substantially higher accuracy than other existing randomized approaches, with less runtime.</description>
<size>2379301</size>
</item><item>
<title>Crater Dataset</title>
<category>Dataset</category>
<infohash>30748b1a7ac99b1c5ff66f0bc5c5f7428ed035c5</infohash>
<guid>https://academictorrents.com/details/30748b1a7ac99b1c5ff66f0bc5c5f7428ed035c5</guid>
<link>https://academictorrents.com/details/30748b1a7ac99b1c5ff66f0bc5c5f7428ed035c5</link>
<description>Dataset Objective:</description>
<size>32489372</size>
</item><item>
<title>CVPR Indoor Scene Recognition</title>
<category>Dataset</category>
<infohash>59aa0ad684e5d849f68bad9a6d43a9000a927164</infohash>
<guid>https://academictorrents.com/details/59aa0ad684e5d849f68bad9a6d43a9000a927164</guid>
<link>https://academictorrents.com/details/59aa0ad684e5d849f68bad9a6d43a9000a927164</link>
<description>![](http://web.mit.edu/torralba/www/allIndoors.jpg) Indoor scene recognition is a challenging open problem in high level vision. Most scene recognition models that work well for outdoor scenes perform poorly in the indoor domain. The main difficulty is that while some indoor scenes (e.g. corridors) can be well characterized by global spatial properties, others (e.g. bookstores) are better characterized by the objects they contain. More generally, to address the indoor scene recognition problem we need a model that can exploit local and global discriminative information. ##Database The database contains 67 indoor categories, and a total of 15620 images. The number of images varies across categories, but there are at least 100 images per category. All images are in jpg format. The images provided here are for research purposes only. ##Evaluation For the results in the paper we use a subset of the dataset that has the same number of training and testing samples per class. The partition that we use is: TrainImages.txt: contains the file names of each training image. Total 67*80 images TestImages.txt: contains the file names of each test image. Total 67*20 images ##Annotations A subset of the images are segmented and annotated with the objects that they contain. The annotations are in LabelMe format. ##Paper A. Quattoni and A. Torralba. Recognizing Indoor Scenes. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009. ##Acknowledgments Thanks to Aude Oliva for helping to create the database of indoor scenes. Funding for this research was provided by NSF Career award (IIS 0747120)</description>
<size>2592010240</size>
</item><item>
<title>Effectiveness of Cybersecurity Competitions</title>
<category>Paper</category>
<infohash>30ec3bb79d95e4af3b92315a5a073fb10ec8a87d</infohash>
<guid>https://academictorrents.com/details/30ec3bb79d95e4af3b92315a5a073fb10ec8a87d</guid>
<link>https://academictorrents.com/details/30ec3bb79d95e4af3b92315a5a073fb10ec8a87d</link>
<description>There has been a heightened interest among U.S. government agencies to fund cybersecurity workforce development. These efforts include offering universities funding for student scholarships, funding for building capacity in cybersecurity education, as well as sponsoring cybersecurity competitions, games, and outreach programs. This paper examines the effectiveness of cybersecurity competitions in educating students. Our study shows that though competitions do pique students' interest, the effectiveness of this approach in producing more high quality professionals can be limited. One reason is that the knowledge barrier to compete in these competitions is high. To be successful, students have to be proficient in operating systems, application services, software engineering, system administration and networking. Many Computer Science and Information Technology students do not feel qualified, and consequently this reduces participation from a wider student audience. Our approach takes aim at lowering this barrier to entry. We employ a hands-on learning methodology where students attend lectures on background knowledge on weekdays and practice what they learn in weekend workshops. A virtual networking environment is provided for students to practice network defense in the workshops and on their own time.</description>
<size>318479</size>
</item><item>
<title>NLCD2006 Land Cover (NLCD2006_landcover_4-20-11_se5.zip)</title>
<category>Dataset</category>
<infohash>184551842564cb05ffc5368629537ffec58d6985</infohash>
<guid>https://academictorrents.com/details/184551842564cb05ffc5368629537ffec58d6985</guid>
<link>https://academictorrents.com/details/184551842564cb05ffc5368629537ffec58d6985</link>
<description>National Land Cover Database 2006 (NLCD2006) is a 16-class land cover classification scheme that has been applied consistently across the conterminous United States at a spatial resolution of 30 meters. NLCD2006 is based primarily on the unsupervised classification of Landsat Enhanced Thematic Mapper+ (ETM+) circa 2006 satellite data. NLCD2006 also quantifies land cover change between the years 2001 and 2006. The NLCD2006 land cover change product was generated by comparing spectral characteristics of Landsat imagery between 2001 and 2006, on an individual path/row basis, using protocols to identify and label change based on the trajectory from NLCD2001 products. It represents the first time this type of 30 meter resolution land cover change product has been produced for the conterminous United States. A formal accuracy assessment of the NLCD2006 land cover change product is planned for 2011. Generation of NLCD2006 products helped to identify some issues in the NLCD2001 land cover and percent developed imperviousness products only (there were no changes to the NLCD2001 percent canopy). These issues were evaluated and corrected, necessitating a reissue of NLCD2001 products (NLCD2001 Version 2.0) as part of the NLCD2006 release. A majority of the NLCD2001 updates occurred in coastal mapping zones where NLCD2001 was published prior to the completion of the National Oceanic and Atmospheric Administration (NOAA) Coastal Change Analysis Program (C-CAP) 2001 land cover products. NOAA C-CAP 2001 land cover has now been seamlessly integrated with NLCD2001 land cover for all coastal zones. NLCD2001 percent developed imperviousness was also updated as part of this process.</description>
<size>1082594298</size>
</item><item>
<title>Mars Weekend: A Panel and Games at the Museum of Science Boston</title>
<category>Paper</category>
<infohash>b0700675b5b7756ba6243420a9db09380a5d27b2</infohash>
<guid>https://academictorrents.com/details/b0700675b5b7756ba6243420a9db09380a5d27b2</guid>
<link>https://academictorrents.com/details/b0700675b5b7756ba6243420a9db09380a5d27b2</link>
<description>This ongoing outreach project uniquely combines the data, systems, and resources of four existing NASA funded research projects on Mars robotic navigation (MER Participating Scientist project and ExoMars PanCam project), intelligent Mars data processing (AISR Crater Detection project), and Lunar mapping (LRO Participating Scientist project). The project aims to stimulate public excitement about Mars and Lunar science and exploration and to enrich the public with expertise developed at The Ohio State University (OSU), the University of Massachusetts Boston (UMB), and the Lunar and Planetary Institute (LPI) through our outreach partner, the Museum of Science, Boston.</description>
<size>146895</size>
</item><item>
<title>Genetically Enhanced Feature Selection of Discriminative Planetary Crater Image Features</title>
<category>Paper</category>
<infohash>cb1655a57dd24345c9ea7a43c5ec09e03c7a0979</infohash>
<guid>https://academictorrents.com/details/cb1655a57dd24345c9ea7a43c5ec09e03c7a0979</guid>
<link>https://academictorrents.com/details/cb1655a57dd24345c9ea7a43c5ec09e03c7a0979</link>
<description>Using gray-scale texture features has recently become a new trend in supervised machine learning crater detection algorithms. To provide better classification of craters in planetary images, feature subset selection is used to reduce irrelevant and redundant features. Feature selection is known to be NP-hard. To provide an efficient suboptimal solution, three genetic algorithms are proposed to use greedy selection, weighted random selection, and simulated annealing to distinguish discriminative features from indiscriminative features. A significant increase in the classification ability of a Bayesian classifier in crater detection using image texture features is observed.</description>
<size>580959</size>
</item><item>
<title>Wikipedia English Official Offline Edition (version 20130805) [Xprt]</title>
<category>Dataset</category>
<infohash>30ac2ef27829b1b5a7d0644097f55f335ca5241b</infohash>
<guid>https://academictorrents.com/details/30ac2ef27829b1b5a7d0644097f55f335ca5241b</guid>
<link>https://academictorrents.com/details/30ac2ef27829b1b5a7d0644097f55f335ca5241b</link>
<description>Wikipedia offers free copies of all available content to interested users. These databases can be used for mirroring, personal use, informal backups, offline use or database queries (such as for Wikipedia:Maintenance). All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL). Images and other files are available under different terms, as detailed on their description pages. For our advice about complying with these licenses, see Wikipedia:Copyrights.</description>
<size>10082006833</size>
</item></channel>
</rss>
