Info hash | 84461687ecb08ce9d0f24b70d0528e4ae5d6966e |
Last mirror activity | 159d, 22:53:39 ago |
Size | 279.01GB (279,013,071,677 bytes) |
Added | 2021-05-04 14:47:02 |
Views | 158 |
Hits | 996 |
ID | 4651 |
Type | multi |
Downloaded | 0 time(s) |
Uploaded by | abc25 |
Folder | ImageNet-21K-P |
Num files | 2 files |
Mirrors | 0 complete, 0 downloading = 0 mirror(s) total |
ImageNet-21K-P (2 files)
train.tar.gz | 266.47GB |
val.tar.gz | 12.54GB |
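The two archives hold the processed training and validation splits. Below is a minimal sketch of how they might be unpacked and loaded with PyTorch/torchvision; the extraction root, the unpacked directory names, and the assumption that each archive expands into one subfolder per WordNet synset are illustrative, not guaranteed by this listing.

# Minimal sketch: unpack the two archives and load them as ImageFolder datasets.
# Paths and the per-class subfolder layout are assumptions, not part of this listing.
import tarfile
from pathlib import Path

from torchvision import datasets, transforms

root = Path("imagenet21k_p")          # hypothetical extraction root
root.mkdir(parents=True, exist_ok=True)

for archive in ("train.tar.gz", "val.tar.gz"):
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(root)          # assumed to create e.g. imagenet21k_train/, imagenet21k_val/

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# ImageFolder expects one subdirectory per class (WordNet synset).
train_set = datasets.ImageFolder(root / "imagenet21k_train", transform=preprocess)
val_set = datasets.ImageFolder(root / "imagenet21k_val", transform=preprocess)
print(len(train_set.classes), "classes,", len(train_set), "training images")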
Type: Dataset
Tags: imagenet, deep learning, imagenet21K, imagenet-21K, pretraining, fall11_whole.tar
Bibtex:
@article{,
  title     = {ImageNet-21K-P dataset (processed from fall11_whole.tar)},
  journal   = {},
  author    = {https://arxiv.org/pdf/2104.10972},
  year      = {},
  url       = {},
  abstract  = {ImageNet-1K serves as the primary dataset for pretraining deep learning models for computer vision tasks. ImageNet-21K dataset, which contains more pictures and classes, is used less frequently for pretraining, mainly due to its complexity, and underestimation of its added value compared to standard ImageNet-1K pretraining. This paper aims to close this gap, and make high-quality efficient pretraining on ImageNet-21K available for everyone. Via a dedicated preprocessing stage, utilizing WordNet hierarchies, and a novel training scheme called semantic softmax, we show that different models, including small mobile-oriented models, significantly benefit from ImageNet-21K pretraining on numerous datasets and tasks. We also show that we outperform previous ImageNet-21K pretraining schemes for prominent new models like ViT. Our proposed pretraining pipeline is efficient, accessible, and leads to SoTA reproducible results, from a publicly available dataset.},
  keywords  = {imagenet, deep learning, imagenet21K, imagenet-21K, pretraining, fall11_whole.tar},
  terms     = {You have been granted access for non-commercial research/educational use. By accessing the data, you have agreed to the following terms. You (the "Researcher") have requested permission to use the ImageNet database (the "Database") at Princeton University and Stanford University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions:
    1. Researcher shall use the Database only for non-commercial research and educational purposes.
    2. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
    3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.
    4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
    5. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.
    6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
    7. The law of the State of New Jersey shall apply to all disputes under this agreement.},
  license   = {},
  superseded= {}
}
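The abstract mentions a "semantic softmax" training scheme built on WordNet hierarchies. As a rough illustration of the idea only (not the paper's implementation), the sketch below computes a separate cross-entropy over each hierarchy level and averages the per-level losses; the class-to-level assignment, the remapping, and the uniform averaging are all assumptions made for this example.

# Minimal sketch of the "semantic softmax" idea: classes are partitioned into
# hierarchy levels and a separate softmax/cross-entropy is taken per level.
# The toy level assignment below is hypothetical; the real one comes from the
# paper's WordNet-based preprocessing.
import torch
import torch.nn.functional as F

def semantic_softmax_loss(logits, targets, class_to_level):
    """logits: (B, C); targets: (B,); class_to_level: (C,) hierarchy level per class."""
    loss, terms = logits.new_zeros(()), 0
    for level in class_to_level.unique():
        cls_idx = (class_to_level == level).nonzero(as_tuple=True)[0]  # classes in this level
        mask = (class_to_level[targets] == level)                      # samples labelled in this level
        if not mask.any():
            continue
        # remap global class indices to positions within this level's subset
        remap = torch.full((logits.size(1),), -1, dtype=torch.long, device=logits.device)
        remap[cls_idx] = torch.arange(cls_idx.numel(), device=logits.device)
        level_logits = logits[mask][:, cls_idx]
        level_targets = remap[targets[mask]]
        loss = loss + F.cross_entropy(level_logits, level_targets)
        terms += 1
    return loss / max(terms, 1)

# Toy usage: 10 classes split across 3 hypothetical hierarchy levels.
logits = torch.randn(4, 10)
targets = torch.tensor([0, 3, 7, 9])
class_to_level = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
print(semantic_softmax_loss(logits, targets, class_to_level))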