<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:academictorrents="http://academictorrents.com/" version="2.0">
<channel>
<title>VGG Oxford - Academic Torrents</title>
<description>collection curated by carandraug</description>
<link>https://academictorrents.com/collection/vgg-oxford</link>
<item>
<title>Learning a Part-Level Motion Prior for Articulated Objects (Dataset)</title>
<description>@article{,
title= {Learning a Part-Level Motion Prior for Articulated Objects},
journal= {},
author= {Ruining Li and Chuanxia Zheng and Christian Rupprecht and Andrea Vedaldi},
year= {},
url= {https://www.robots.ox.ac.uk/~vgg/research/DragAPart/},
abstract= {},
keywords= {},
terms= {},
license= {},
superseded= {}
}

</description>
<link>https://academictorrents.com/download/2e955e41f40147603641573b7e839efae9af9a7f</link>
</item>
<item>
<title>The Oxford-IIIT Pet Dataset (Dataset)</title>
<description>@article{,
title= {The Oxford-IIIT Pet Dataset},
journal= {},
author= {Omkar M Parkhi and Andrea Vedaldi and Andrew Zisserman and C. V. Jawahar},
year= {},
url= {https://www.robots.ox.ac.uk/~vgg/data/pets/},
abstract= {We have created a 37 category pet dataset with roughly 200 images for each class. The images have large variations in scale, pose and lighting. All images have an associated ground truth annotation of breed, head ROI, and pixel-level trimap segmentation.},
keywords= {},
terms= {The dataset is available to download for commercial/research purposes under a Creative Commons Attribution-ShareAlike 4.0 International License. The copyright remains with the original owners of the images.},
license= {Creative Commons Attribution-ShareAlike 4.0 International License},
superseded= {}
}

</description>
<link>https://academictorrents.com/download/b18bbd9ba03d50b0f7f479acc9f4228a408cecc1</link>
</item>
<item>
<title>Synthetic Data for Text Localisation in Natural Images (Dataset)</title>
<description>@inproceedings{gupta16,
author= {Ankush Gupta and Andrea Vedaldi and Andrew Zisserman},
title= {Synthetic Data for Text Localisation in Natural Images},
booktitle= {IEEE Conference on Computer Vision and Pattern Recognition},
year= {2016},
abstract= {This is a synthetically generated dataset, in which word instances are placed in natural scene images, while taking into account the scene layout.

The dataset consists of *800 thousand* images with approximately *8 million* synthetic word instances. Each text instance is annotated with its text-string, word-level and character-level bounding-boxes.},
keywords= {},
terms= {You (the "Researcher"), have requested permission to use the SynthText in the Wild database (the "Database") at the University of Oxford. In exchange for such permission, the Researcher hereby agrees to the following terms and conditions:

1. Researcher shall use the Database only for non-commercial* research and educational purposes.

2. University of Oxford makes no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.

3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify University of Oxford, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.

4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.

5. University of Oxford reserves the right to terminate Researcher's access to the Database at any time.

6. If Researcher is employed by a for-profit, commercial entity*, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.

*  For commercial applications and licensing, contact Roy Azoulay at roy.azoulay@innovation.ox.ac.uk},
license= {},
superseded= {},
url= {https://www.robots.ox.ac.uk/~vgg/data/scenetext/}
}

</description>
<link>https://academictorrents.com/download/2dba9518166cbd141534cbf381aa3e99a087e83c</link>
</item>
<item>
<title>Reading Text in the Wild with Convolutional Neural Networks (Dataset)</title>
<description>@article{jaderberg16,
author= {Max Jaderberg and Karen Simonyan and Andrea Vedaldi and Andrew Zisserman},
title= {Reading Text in the Wild with Convolutional Neural Networks},
journal= {International Journal of Computer Vision},
number= {1},
volume= {116},
pages= {1--20},
month= {jan},
year= {2016},
abstract= {The exact data used to train our deep convolutional neural networks (see our [research page](http://www.robots.ox.ac.uk/~vgg/research/text/)) is included in this torrent.

This is a synthetically generated dataset, which we found sufficient for training text recognition on real-world images.

![Synthetic Data Engine process](https://i.imgur.com/cqmgbUa.png)

This dataset consists of *9 million images* covering *90k English words*, and includes the training, validation and test splits used in our work.},
keywords= {},
terms= {},
license= {},
superseded= {},
url= {https://www.robots.ox.ac.uk/~vgg/data/text/}
}

</description>
<link>https://academictorrents.com/download/3d0b4f09080703d2a9c6be50715b46389fdb3af1</link>
</item>
</channel>
</rss>
