Collections: Deep Learning, Computer Vision, PASCAL Visual Object Classes Challenge
pascal-context (4 files)
33_context_labels.tar.gz | 27.13MB |
33_labels.txt | 0.35kB |
instances.tar.gz | 25.09MB |
trainval.tar.gz | 30.71MB |
Type: Dataset
Tags: PASCAL
Bibtex:
@article{,
  title = {PASCAL-Context Dataset},
  journal = {},
  author = {UCLA CCVL},
  year = {},
  url = {},
  abstract = {This dataset is a set of additional annotations for PASCAL VOC 2010. It goes beyond the original PASCAL semantic segmentation task by providing annotations for the whole scene. The statistics section has the full list of 400+ labels. Every pixel has a unique class label. Instance information (i.e., separate masks for different instances of the same class in the same image) is currently provided for the 20 PASCAL objects.

  Statistics: Since the dataset is an annotation of PASCAL VOC 2010, it has the same statistics as the original dataset. The training and validation sets contain 10,103 images, while the test set contains 9,637 images.

  Usage Considerations: The classes are not drawn from a fixed pool. Instead, labelers were free to select or type in what they believed to be the appropriate class and to decide the appropriate object granularity. We decided to merge/split some of the categories, so the current number of categories differs from the one mentioned in the CVPR 2014 paper. When using this dataset, it is important to examine the classes to ensure they match your intended use. For example, sand is often labeled independently despite also being considered ground; those interested in ground may want to cluster sand and ground together, along with other such classes.

  Citation: The Role of Context for Object Detection and Semantic Segmentation in the Wild. Roozbeh Mottaghi, Xianjie Chen, Xiaobai Liu, Nam-Gyu Cho, Seong-Whan Lee, Sanja Fidler, Raquel Urtasun, Alan Yuille. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.

  Acknowledgements: We would like to acknowledge support from the Implementation of Technologies for Identification, Behavior, and Location of Human based on Sensor Network Fusion Program through the Korean Ministry of Trade, Industry and Energy (Grant Number: 10041629). We would also like to thank the National Science Foundation for grant 1317376 (Visual Cortex on Silicon, NSF Expedition in Computing). We thank Viet Nguyen for coordinating and leading the effort to clean up the annotations.},
  keywords = {PASCAL},
  terms = {}
}
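The usage note above about clustering classes (e.g., folding sand into ground) amounts to a small label-remapping step before training. Below is a minimal sketch in Python; the format of 33_labels.txt (assumed here to be one "index: name" entry per line) and the exact class names in the merge map are assumptions, not something this listing specifies.

import numpy as np

def load_label_map(path="33_labels.txt"):
    # Assumed format: one "index: name" entry per line (not specified by this listing).
    name_to_id = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            idx, name = line.split(":", 1)
            name_to_id[name.strip()] = int(idx)
    return name_to_id

def merge_classes(mask, name_to_id, merges):
    # mask: 2-D NumPy array of per-pixel class IDs; merges: {source name: target name}.
    out = mask.copy()
    for src, dst in merges.items():
        if src in name_to_id and dst in name_to_id:
            out[out == name_to_id[src]] = name_to_id[dst]
    return out

if __name__ == "__main__":
    # Hypothetical demo with a tiny fake label map and mask.
    label_map = {"ground": 1, "sand": 2}
    mask = np.array([[1, 2], [2, 1]])
    print(merge_classes(mask, label_map, {"sand": "ground"}))
    # -> [[1 1]
    #     [1 1]]

The same remapping idea extends to any other fine-grained classes you decide to cluster; only the merges dictionary changes.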