PASCAL Visual Object Classes Challenge 2007 (VOC2007) VOCtest_06-Nov-2007.tar
Everingham, M. and Van~Gool, L. and Williams, C. K. I. and Winn, J. and Zisserman, A.

VOCtest_06-Nov-2007.tar (451.02 MB)
Type: Dataset
Tags:

Bibtex:
@misc{pascal-voc-2007,
author= {Everingham, M. and Van~Gool, L. and Williams, C. K. I. and Winn, J. and Zisserman, A.},
title= {PASCAL Visual Object Classes Challenge 2007 (VOC2007) VOCtest_06-Nov-2007.tar},
howpublished= {http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html},
abstract= {==Introduction

The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are:

Person: person
Animal: bird, cat, cow, dog, horse, sheep
Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train
Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor
There will be two main competitions, and two smaller-scale "taster" competitions.

==Main Competitions

Classification: For each of the twenty classes, predicting presence/absence of an example of that class in the test image.
Detection: Predicting the bounding box and label of each object from the twenty target classes in the test image (an overlap-test sketch follows at the end of this section).
 
20 classes: aeroplane, bicycle, bird, boat, bottle, bus, car, cat, chair, cow, dining table, dog, horse, motorbike, person, potted plant, sheep, sofa, train, tv/monitor
 
Participants may enter either (or both) of these competitions, and can choose to tackle any (or all) of the twenty object classes. The challenge allows for two approaches to each of the competitions:

Participants may use systems built or trained using any methods or data excluding the provided test sets.
Systems are to be built or trained using only the provided training/validation data.
The intention in the first case is to establish just what level of success can currently be achieved on these problems and by what method; in the second case the intention is to establish which method is most successful given a specified training set.
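
To make the detection task concrete, the sketch below shows an intersection-over-union overlap test of the kind commonly used to decide whether a predicted box matches a ground-truth box. It is written in Python purely for illustration (the challenge's own evaluation software, mentioned under Data, is in MATLAB), and the (xmin, ymin, xmax, ymax) box convention and the 0.5 acceptance threshold are illustrative assumptions, not details stated on this page.

# Minimal sketch: decide whether a predicted box matches any ground-truth
# box of the same class via intersection-over-union. The 0.5 threshold and
# the corner-coordinate box convention are assumptions for illustration.
def iou(box_a, box_b):
    """Area of intersection divided by area of union of two boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0

def matches_ground_truth(pred_box, gt_boxes, threshold=0.5):
    """True if the predicted box overlaps any ground-truth box enough."""
    return any(iou(pred_box, gt) >= threshold for gt in gt_boxes)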

==Taster Competitions

Segmentation: Generating pixel-wise segmentations giving the class of the object visible at each pixel, or "background" otherwise (a minimal sketch of such a labelling follows below).
Person Layout: Predicting the bounding box and label of each part of a person (head, hands, feet).

Participants may enter either (or both) of these competitions.
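
To illustrate what a pixel-wise segmentation result looks like, the Python sketch below represents it as a 2D array of class indices (index 0 taken to be "background", indices 1-20 the object classes) and compares it against a ground-truth array of the same shape. This indexing convention and the per-class pixel accuracy are illustrative assumptions; they are not the official evaluation procedure of the challenge.

# Minimal sketch: a pixel-wise segmentation as a 2D array of class indices
# (0 = background, 1..20 = object classes). The per-class pixel accuracy
# below is only an illustration, not the official VOC evaluation.
import numpy as np

def per_class_pixel_accuracy(pred, gt, num_classes=21):
    """Fraction of correctly labelled pixels for each class present in gt."""
    accuracies = {}
    for class_index in range(num_classes):
        mask = gt == class_index
        if mask.any():
            accuracies[class_index] = float((pred[mask] == class_index).mean())
    return accuracies

# Example with tiny random label maps (sizes are arbitrary):
# gt = np.random.randint(0, 21, size=(4, 4))
# pred = np.random.randint(0, 21, size=(4, 4))
# print(per_class_pixel_accuracy(pred, gt))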

The VOC2007 challenge has been organized following the successful VOC2005 and VOC2006 challenges. Compared to VOC2006 we have increased the number of classes from 10 to 20 and added the taster competitions, which have been introduced to gauge interest in segmentation and person layout.

==Data

The training data provided consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the twenty classes present in the image. Note that multiple objects from multiple classes may be present in the same image. Some example images can be viewed online.
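
As a rough illustration of how these annotation files can be consumed, the Python sketch below pulls the class labels and bounding boxes out of a single file. It assumes the XML layout conventionally used by the VOC development kit (Annotations/<image id>.xml with <object>, <name>, <bndbox> and a 'difficult' flag); treat the tag names and paths as assumptions rather than a specification of this release.

# Minimal sketch: read class labels and bounding boxes from one VOC-style
# annotation file. The XML layout (object/name/bndbox tags, 'difficult'
# flag) is assumed from the usual development-kit convention.
import xml.etree.ElementTree as ET

def read_annotation(xml_path):
    """Return a list of {name, difficult, bbox} dicts for one image."""
    objects = []
    root = ET.parse(xml_path).getroot()
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        bbox = tuple(int(float(box.findtext(tag)))
                     for tag in ("xmin", "ymin", "xmax", "ymax"))
        objects.append({"name": obj.findtext("name"),
                        "difficult": obj.findtext("difficult") == "1",
                        "bbox": bbox})
    return objects

# Example (path is hypothetical):
# print(read_annotation("VOCdevkit/VOC2007/Annotations/000001.xml"))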

Annotation was performed according to a set of guidelines distributed to all annotators; these guidelines are available on the challenge website.

The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission.
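
As a small illustration of how the training/validation split might be read once the development kit is unpacked, the sketch below loads the plain-text image lists that the kit conventionally stores under ImageSets/Main (one image identifier per line, with per-class files adding a 1/-1/0 label column). The directory layout, file names and label coding are assumptions based on that convention rather than details given on this page.

# Minimal sketch: load image-id lists for a split. The ImageSets/Main
# layout and the 1/-1/0 per-class label coding are assumed conventions,
# not taken from this page.
from pathlib import Path

def read_image_ids(list_file):
    """Return the image identifiers listed in a split file."""
    lines = Path(list_file).read_text().splitlines()
    return [line.split()[0] for line in lines if line.strip()]

def read_class_labels(class_list_file):
    """Return {image_id: label} for a per-class split file (1/-1/0)."""
    labels = {}
    for line in Path(class_list_file).read_text().splitlines():
        if line.strip():
            image_id, label = line.split()
            labels[image_id] = int(label)
    return labels

# Example (paths are hypothetical):
# train_ids = read_image_ids("VOCdevkit/VOC2007/ImageSets/Main/train.txt")
# cat_train = read_class_labels("VOCdevkit/VOC2007/ImageSets/Main/cat_train.txt")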

In the second stage, the test set will be made available for the actual competition. As in the VOC2006 challenge, no ground truth for the test data will be released until after the challenge is complete.

The data has been split into 50% for training/validation and 50% for testing. The distributions of images and objects by class are approximately equal across the training/validation and test sets. In total there are 9,963 images, containing 24,640 annotated objects.},
}

