Info hash | d5d80c1ad9d6b44b6e80c942414f1753bf9a1970 |
Last mirror activity | 2:00 ago |
Size | 387.89MB (387,885,021 bytes) |
Added | 2014-09-22 21:52:36 |
Views | 2158 |
Hits | 18106 |
ID | 2873 |
Type | multi |
Downloaded | 2363 time(s) |
Uploaded by | joecohen |
Folder | groundtruth |
Num files | 1365 files |
Mirrors | 4 complete, 0 downloading = 4 mirror(s) total |
groundtruth (1365 files)
yellowstone/Thumbs.db | 7.68kB |
yellowstone/Image48.jpg | 408.33kB |
yellowstone/Image47.jpg | 395.72kB |
yellowstone/Image46.jpg | 414.18kB |
yellowstone/Image45.jpg | 335.25kB |
yellowstone/Image44.jpg | 276.80kB |
yellowstone/Image43.jpg | 286.78kB |
yellowstone/Image42.jpg | 293.39kB |
yellowstone/Image41.jpg | 279.88kB |
yellowstone/Image40.jpg | 291.23kB |
yellowstone/Image39.jpg | 259.92kB |
yellowstone/Image38.jpg | 307.61kB |
yellowstone/Image37.jpg | 257.42kB |
yellowstone/Image36.jpg | 354.87kB |
yellowstone/Image35.jpg | 343.68kB |
yellowstone/Image34.jpg | 321.54kB |
yellowstone/Image33.jpg | 279.16kB |
yellowstone/Image32.jpg | 281.09kB |
yellowstone/Image31.jpg | 347.85kB |
yellowstone/Image30.jpg | 287.60kB |
yellowstone/Image29.jpg | 323.49kB |
yellowstone/Image28.jpg | 300.42kB |
yellowstone/Image27.jpg | 271.40kB |
yellowstone/Image26.jpg | 473.97kB |
yellowstone/Image25.jpg | 446.19kB |
yellowstone/Image24.jpg | 303.85kB |
yellowstone/Image23.jpg | 303.37kB |
yellowstone/Image22.jpg | 282.60kB |
yellowstone/Image21.jpg | 267.06kB |
yellowstone/Image20.jpg | 359.04kB |
yellowstone/Image19.jpg | 310.43kB |
yellowstone/Image18.jpg | 323.78kB |
yellowstone/Image17.jpg | 307.93kB |
yellowstone/Image16.jpg | 303.88kB |
yellowstone/Image15.jpg | 273.73kB |
yellowstone/Image14.jpg | 291.06kB |
yellowstone/Image13.jpg | 363.00kB |
yellowstone/Image12.jpg | 305.38kB |
yellowstone/Image11.jpg | 256.05kB |
yellowstone/Image10.jpg | 242.02kB |
yellowstone/Image09.jpg | 247.60kB |
yellowstone/Image08.jpg | 271.74kB |
yellowstone/Image07.jpg | 255.69kB |
yellowstone/Image06.jpg | 278.66kB |
yellowstone/Image05.jpg | 215.95kB |
yellowstone/Image04.jpg | 236.51kB |
yellowstone/Image03.jpg | 256.93kB |
yellowstone/Image02.jpg | 362.14kB |
yellowstone/Image01.jpg | 410.79kB |
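The listing above shows the layout of one category: yellowstone/Image01.jpg through yellowstone/Image48.jpg, plus a stray Thumbs.db. As a minimal sketch only, assuming the torrent has been extracted into a local folder named groundtruth/ and that Pillow is installed (both are assumptions, not part of the dataset), the images of one category could be enumerated like this:

    from pathlib import Path
    from PIL import Image  # Pillow; an assumed dependency, not shipped with the dataset

    # Assumed local path to the extracted "groundtruth" folder from this torrent.
    DATASET_ROOT = Path("groundtruth")

    def iter_images(category="yellowstone"):
        """Yield (path, image) pairs for one category, skipping
        non-image files such as Thumbs.db."""
        for path in sorted((DATASET_ROOT / category).glob("*.jpg")):
            yield path, Image.open(path)

    if __name__ == "__main__":
        for path, img in iter_images():
            print(f"{path.name}: {img.size[0]}x{img.size[1]} pixels")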
Type: Dataset
Tags:
Bibtex:
@article{,
  title    = {Object and Concept Recognition for Content-Based Image Retrieval (CBIR)},
  journal  = {},
  author   = {University of Washington},
  year     = {},
  url      = {http://www.cs.washington.edu/research/imagedatabase/},
  abstract = {Our groundtruth database consists of 21 datasets of outdoor scene images, many including a text file containing a list of visible objects for each image.
    Project Summary: With the advent of powerful but inexpensive computers and storage devices and with the availability of the World Wide Web, image databases have moved from research to reality. Search engines for finding images are available from commercial concerns and from research institutes. These search engines can retrieve images by keywords or by image content such as color, texture, and simple shape properties. Content-based image retrieval is not yet a commercial success, because most real users searching for images want to specify the semantic class of the scene or the object(s) it should contain. The large commercial image providers are still using human indexers to select keywords for their images, even though their databases contain thousands or, in some cases, millions of images. Automatic object recognition is needed, but most successful computer vision object recognition systems can only handle particular objects, such as industrial parts, that can be represented by precise geometric models. Content-based retrieval requires the recognition of generic classes of objects and concepts. A limited amount of work has been done in this respect, but no general methodology has yet emerged. The goal of this research is to develop the necessary methodology for automated recognition of generic object and concept classes in digital images. The work will build on existing object-recognition techniques in computer vision for low-level feature extraction and will design higher-level relationship and cluster features and a new unified recognition methodology to handle the difficult problem of recognizing classes of objects, instead of particular instances. Local feature representations and global summaries that can be used by general-purpose classifiers will be developed. A powerful new hierarchical multiple classifier methodology will provide the learning mechanism for automating the development of recognizers for additional objects and concepts. The resulting techniques will be evaluated on several different large image databases, including commercial databases whose images are grouped into broad classes and a ground-truth database that provides a list of the objects in each image. The results of this work will be a new generic object recognition paradigm that can immediately be applied to automated or semi-automated indexing of large image databases and will be a step forward in object recognition.
    Project Impact: The results of this project will have an impact on both image retrieval from large databases and object recognition in general. It will target the recognition of classes of common objects that can appear in image databases of outdoor scenes. It will develop object class recognizers and a new learning formalism for automating the production of new classifiers for new classes of objects. It will also develop new representations for the image features that can be used to recognize these objects. It will allow content-based retrieval to become an important method for accessing real, commercial image databases, which today use only human index terms for retrieval.
    Goals, Objectives and Targeted Activities: In the first year of the grant, we developed feature extraction routines to extract features for recognizing an initial set of common objects representing a variety of the types of objects that appear in outdoor scenes, including city scenes and non-city scenes. We designed generic object recognition algorithms for the initial object set. We have developed such algorithms for vehicles, boats, and buildings, and have designed new high-level image features including symmetry features and cluster features. In the second year, we designed a unified representation for the image features called abstract regions. These are regions of the image that can come about from many different processes: color clustering, texture clustering, line-segment clustering, symmetry detection, and so on. All abstract regions will have a common set of features, while each different category will have its own special features. Our current emphasis is on using abstract features along with learning methodologies to recognize common objects.
    Area Background: The area of content-based image retrieval is a hybrid research area that requires knowledge of both computer vision and database systems. Large image databases are being collected, and images from these collections are made available to users in advertising, marketing, entertainment, and other areas where images can be used to enhance the product. These images are generally organized loosely by category, such as animals, natural scenes, people, and so on. All image indexing is done by human indexers who list the important objects in an image and other terms by which users may wish to access it. This method is not suitable for today's very large image databases. Content-based retrieval systems utilize measures that are based on low-level attributes of the image itself, including color histograms, color composition, and texture. State-of-the-art research focuses on more powerful measures that can find regions of an image corresponding to known objects that users wish to retrieve. There has been some success in finding human faces of selected sizes, human bodies, horses, zebras, and other textured animals with known patterns, and such backgrounds as jungles, water, and sky. Our research will focus on a unified methodology for feature representation and object class recognition. This work will lead to automatic indexing capabilities in the future.}
}
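The abstract cites color histograms as one of the low-level features that content-based retrieval systems compute. The sketch below is only a generic illustration of that idea, not the project's abstract-region method; it assumes NumPy and Pillow are installed and the torrent is extracted to a local groundtruth/ folder.

    import numpy as np
    from PIL import Image  # Pillow; an assumed dependency

    def color_histogram(path, bins=8):
        """L1-normalized 3-D RGB color histogram of one image.
        A simple low-level CBIR feature of the kind mentioned in the
        project description; illustrative only."""
        rgb = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
        hist, _ = np.histogramdd(rgb, bins=(bins, bins, bins), range=((0, 256),) * 3)
        hist = hist.ravel()
        return hist / hist.sum()

    def histogram_intersection(h1, h2):
        """Histogram-intersection similarity in [0, 1]; higher means more similar."""
        return float(np.minimum(h1, h2).sum())

    # Example: compare two Yellowstone images from the ground-truth set.
    h1 = color_histogram("groundtruth/yellowstone/Image01.jpg")
    h2 = color_histogram("groundtruth/yellowstone/Image02.jpg")
    print(histogram_intersection(h1, h2))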