Common Image Datasets: Annotation and Retrieval
Published: 2019-06-12


1. Sogou Lab dataset

This internet image corpus is drawn from part of the data indexed by Sogou image search. It covers categories including people, animals, buildings, machinery, landscapes, and sports, for a total of 2,836,535 images. For each image, the dataset provides the original image, a thumbnail, the source web page, and the related text from that page. Over 200 GB in total.

2. ImageCLEF

ImageCLEF aims to provide benchmarks for image-related tasks (retrieval, classification, annotation, etc.), and has run an annual evaluation campaign since 2003.

3. Xirong Li's datasets

Datasets maintained by Xirong Li (PhD, Intelligent Systems Lab Amsterdam), whose research is on video and image retrieval:

 

  • A collection of 3.5 million social-tagged images.
  • A ground-truth set for tag-based social image retrieval.
  • A ground-truth set for retrieving bi-concepts (concept pairs) in unlabeled images.
  • A set of negative examples automatically harvested from social-tagged images for 20 PASCAL VOC concepts.
4. Wikipedia featured articles

Images (with pre-extracted features) from Wikipedia featured articles, paired with the corresponding wiki text. See the paper "A New Approach to Cross-Modal Multimedia Retrieval", and also "On the Role of Correlation and Abstraction in Cross-Modal Multimedia Retrieval", though the latter has no download link yet.
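The first paper above matches images and text by mapping both into a shared subspace learned with canonical correlation analysis (CCA). Below is a minimal sketch of that idea, assuming pre-extracted image and text feature matrices for paired documents; all data and dimensions are placeholders, not the released features:

```python
# Cross-modal retrieval sketch: CCA on paired image/text features (toy data).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_pairs = 500
X = rng.standard_normal((n_pairs, 128))  # image features, one row per document
Y = rng.standard_normal((n_pairs, 10))   # text features for the same documents

# Learn projections that maximize correlation between the two views.
cca = CCA(n_components=8)
cca.fit(X, Y)
Xc, Yc = cca.transform(X, Y)             # both views in the shared subspace

def retrieve(query, database, k=5):
    """Return indices of the k nearest database rows by cosine similarity."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    return np.argsort(-db @ q)[:k]

# Use a document's projected text to rank all projected images.
print(retrieve(Yc[0], Xc))
```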

 

 

5. NUS-WIDE

To our knowledge, this is the largest real-world web image dataset comprising over 269,000 images with over 5,000 user-provided tags, and ground-truth of 81 concepts for the entire dataset. The dataset is much larger than the popularly available Corel and Caltech 101 datasets. Though some datasets comprise over 3 million images, they only have ground-truth for a small fraction of images. Our proposed NUS-WIDE dataset has the ground-truth for the entire dataset.
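Since labels exist for all 81 concepts on every image, NUS-WIDE results are often reported as mean average precision over concepts. Here is a minimal scoring sketch with placeholder arrays; the label and score matrices are illustrative, not the official release files:

```python
# Per-concept average precision over an 81-concept ground truth (toy data).
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n_images, n_concepts = 1000, 81
y_true = rng.integers(0, 2, size=(n_images, n_concepts))  # binary ground truth
scores = rng.random((n_images, n_concepts))               # system ranking scores

# Average over concepts that have at least one positive example.
aps = [average_precision_score(y_true[:, c], scores[:, c])
       for c in range(n_concepts) if y_true[:, c].any()]
print(f"mAP over {len(aps)} concepts: {np.mean(aps):.3f}")
```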

6.

7.

Jégou's datasets. Jégou works specifically on CBIR; the images come with retrieval ground truth but no annotations.
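For CBIR benchmarks of this kind, each query's ranked result list is scored with average precision against the set of ground-truth relevant images, and the mean over all queries (mAP) is reported. A small self-contained sketch, with hypothetical rankings and relevance sets:

```python
# Mean average precision for CBIR from ranked lists and relevant-image sets.
def average_precision(ranked_ids, relevant):
    """AP of one ranked result list against a set of relevant image ids."""
    hits, precision_sum = 0, 0.0
    for rank, img_id in enumerate(ranked_ids, start=1):
        if img_id in relevant:
            hits += 1
            precision_sum += hits / rank  # precision at each recall point
    return precision_sum / len(relevant) if relevant else 0.0

# Hypothetical rankings and ground truth for two queries.
results = {"q1": ["a", "b", "c", "d"], "q2": ["d", "a", "b", "c"]}
gt      = {"q1": {"a", "c"},           "q2": {"b"}}
mAP = sum(average_precision(results[q], gt[q]) for q in results) / len(results)
print(f"mAP = {mAP:.3f}")
```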

8.

VGG's Oxford Buildings dataset, also CBIR-specific data.

9.

The dataset for the Microsoft Image Grand Challenge on Image Retrieval 

 

In addition, here are the datasets compiled on cvpapers:

 

Participate in Reproducible Research

Detection

Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets
LabelMe is a web-based image annotation tool that allows researchers to label images and share the annotations with the rest of the community. If you use the database, we only ask that you contribute to it, from time to time, by using the labeling tool.
1521 images with human faces, recorded under natural conditions, i.e. varying illumination and complex background. The eye positions have been set manually.
Cars, Motorcycles, Airplanes, Faces, Leaves, Backgrounds
Pictures of objects belonging to 101 categories
Pictures of objects belonging to 256 categories
15,560 pedestrian and non-pedestrian samples (image cut-outs) and 6744 additional full images not containing pedestrians for bootstrapping. The test set contains more than 21,790 images with 56,492 pedestrian labels (fully visible or partially occluded), captured from a vehicle in urban traffic.
CVC Pedestrian Datasets
CBCL Pedestrian Database
CBCL Face Database
CBCL Car Database
CBCL Street Database
A large set of marked up images of standing or walking people
A set of car and non-car images taken in a parking lot nearby INRIA
A set of horse and non-horse images
3D skeletons and segmented regions for 1000 people in images
A large-scale vehicle detection dataset
10000 images of natural scenes, with 37 different logos and 2,695 logo instances, annotated with bounding boxes.
10000 images of natural scenes grabbed on Flickr, with 2,695 logo instances cut and pasted from the BelgaLogos dataset.
The dataset FlickrLogos-32 contains photos depicting logos and is meant for the evaluation of multi-class logo detection/recognition as well as logo retrieval methods on real-world images. It consists of 8240 images downloaded from Flickr.
30000+ frames with vehicle rear annotation and classification (car and trucks) on motorway/highway sequences. Annotation semi-automatically generated using laser-scanner data. Distance estimation and consistent target ID over time available.
Phos is a color image database of 15 scenes captured under different illumination conditions. More particularly, every scene of the database contains 15 different images: 9 images captured under various strengths of uniform illumination, and 6 images under different degrees of non-uniform illumination. The images contain objects of different shape, color and texture and can be used for illumination invariant feature detection and selection.
California-ND contains 701 photos taken directly from a real user's personal photo collection, including many challenging non-identical near-duplicate cases, without the use of artificial image transformations. The dataset is annotated by 10 different subjects, including the photographer, regarding near duplicates.

Classification

Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets
Cars, Motorcycles, Airplanes, Faces, Leaves, Backgrounds
Pictures of objects belonging to 101 categories
Pictures of objects belonging to 256 categories
A dataset for testing object class detection algorithms. It contains 255 test images and features five diverse shape-based classes (apple logos, bottles, giraffes, mugs, and swans).
17 Flower Category Dataset
A dataset for Attribute Based Classification. It consists of 30475 images of 50 animals classes with six pre-extracted feature representations for each image.
Dataset of 20,580 images of 120 dog breeds with bounding-box annotation, for fine-grained image categorization.

Recognition

Face and Gesture Recognition Working Group FGnet
Face and Gesture Recognition Working Group FGnet
9971 images of 100 people
A database of face photographs designed for studying the problem of unconstrained face recognition
Traffic Lights Recognition, Lara's public benchmarks.
The PubFig database is a large, real-world face dataset consisting of 58,797 images of 200 people collected from the internet. Unlike most other existing face datasets, these images are taken in completely uncontrolled situations with non-cooperative subjects.
The data set contains 3,425 videos of 1,595 different people. The shortest clip duration is 48 frames, the longest clip is 6,070 frames, and the average length of a video clip is 181.3 frames.
The Microsoft Research Cambridge-12 Kinect gesture data set consists of sequences of human movements, represented as body-part locations, and the associated gesture to be recognized by the system.
This dataset contains 250 pedestrian image pairs + 775 additional images captured in a busy underground station for the research on person re-identification.
Face tracks, features and shot boundaries from our latest CVPR 2013 paper. It is obtained from 6 episodes of Buffy the Vampire Slayer and 6 episodes of Big Bang Theory.
ChokePoint is a video dataset designed for experiments in person identification/verification under real-world surveillance conditions. The dataset consists of 25 subjects (19 male and 6 female) in portal 1 and 29 subjects (23 male and 6 female) in portal 2.

Tracking

Walking pedestrians in busy scenarios from a bird's-eye view
Three pedestrian crossing sequences
The set was recorded in Zurich, using a pair of cameras mounted on a mobile platform. It contains 12,298 annotated pedestrians in roughly 2,000 frames.
BMP image sequences.
Data sets for tracking   and   in aerial image sequences.
The MIT traffic data set is for research on activity analysis and crowded scenes. It includes a 90-minute traffic video sequence recorded by a stationary camera.

Segmentation

Ground truth database of 50 images with: Data, Segmentation, Labelling - Lasso, Labelling - Rectangle
Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets
Cows for object segmentation, Five video sequences for motion segmentation
Geometric Context Dataset: pixel labels for seven geometric classes for 300 images
This dataset contains videos of crowds and other high density moving objects. The videos are collected mainly from the BBC Motion Gallery and Getty Images websites. The videos are shared only for research purposes. Please consult the terms and conditions of use of these videos from the respective websites.
Contains hand-labelled pixel annotations for 38 groups of images, each group containing a common foreground. Approximately 17 images per group, 643 images total.
200 gray level images along with ground truth segmentations
Image segmentation and boundary detection. Grayscale and color segmentations for 300 images, the images are divided into a training set of 200 images, and a test set of 100 images.
328 side-view color images of horses that were manually segmented. The images were randomly collected from the WWW.
10 videos as inputs, and segmented image sequences as ground-truth

Foreground/Background

For evaluating background modelling algorithms
Foreground/Background segmentation and Stereo dataset from Microsoft Cambridge
The SABS (Stuttgart Artificial Background Subtraction) dataset is an artificial dataset for pixel-wise evaluation of background models.
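Sets like SABS score a background model pixel-wise against ground-truth foreground masks. As a rough illustration of the kind of model being evaluated, here is a minimal OpenCV loop using the library's MOG2 subtractor; "video.avi" is a placeholder path, since the datasets provide the actual sequences:

```python
# Minimal background-subtraction loop with OpenCV's MOG2 model.
import cv2

cap = cv2.VideoCapture("video.avi")  # placeholder; use a dataset sequence
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 255 = foreground, 127 = shadow (with detectShadows=True), 0 = background.
    mask = subtractor.apply(frame)
    # A pixel-wise evaluation would compare `mask` to the ground-truth mask here.
    cv2.imshow("foreground mask", mask)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```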

Saliency Detection

120 Images / 20 Observers (Neil D. B. Bruce and John K. Tsotsos 2005).
27 Images / 40 Observers (O. Le Meur, P. Le Callet, D. Barba and D. Thoreau 2006).
100 Images / 31 Observers (Kootstra, G., Nederveen, A. and de Boer, B. 2008).
101 Images / 29 Observers (van der Linde, I., Rajashekar, U., Bovik, A.C., Cormack, L.K. 2009).
912 Images / 14 Observers (Krista A. Ehinger, Barbara Hidalgo-Sotelo, Antonio Torralba and Aude Oliva 2009).
758 Images / 75 Observers (R. Subramanian, H. Katti, N. Sebe1, M. Kankanhalli and T-S. Chua 2010).
235 Images / 19 Observers (Jian Li, Martin D. Levine, Xiangjing An and Hangen He 2011).
ECSSD contains 1000 natural images with complex foreground or background. For each image, the ground truth mask of salient object(s) is provided.

Video Surveillance

For the CAVIAR project a number of video clips were recorded acting out the different scenarios of interest. These include people walking alone, meeting with others, window shopping, entering and exiting shops, fighting and passing out and, last but not least, leaving a package in a public place.
ViSOR contains a large set of multimedia data and the corresponding annotations.

Multiview

Multiview stereo data sets: a set of images
Dinosaur, Model House, Corridor, Aerial views, Valbonne Church, Raglan Castle, Kapel sequence
Oxford colleges
Temple, Dino
Venus de Milo, Duomo in Pisa, Notre Dame de Paris
Dataset provided by Center for Machine Perception
CVLab dense multi-view stereo image database
Objects viewed from 144 calibrated viewpoints under 3 different lighting conditions
Images from 19 sites collected from a helicopter flying around Providence, RI. USA. The imagery contains approximately a full circle around each site.
24 scenarios recorded with 8 IP video cameras. The first 22 scenarios contain a fall and confounding events; the last 2 contain only confounding events.

Action

This dataset consists of a set of actions collected from various sports which are typically featured on broadcast television channels such as the BBC and ESPN. The video sequences were obtained from a wide range of stock footage websites including BBC Motion gallery, and GettyImages.
This dataset features video sequences that were obtained using an R/C-controlled blimp equipped with an HD camera mounted on a gimbal. The collection represents a diverse pool of actions featured at different heights and aerial viewpoints. Multiple instances of each action were recorded at different flying altitudes, which ranged from 400 to 450 feet, and were performed by different actors.
It contains 11 action categories collected from YouTube.
Walk, Run, Jump, Gallop sideways, Bend, One-hand wave, Two-hands wave, Jump in place, Jumping Jack, Skip.
UCF50 is an action recognition dataset with 50 action categories, consisting of realistic videos taken from YouTube.
The Action Similarity Labeling (ASLAN) Challenge.
The dataset was captured by a Kinect device. There are 12 dynamic American Sign Language (ASL) gestures, and 10 people. Each person performs each gesture 2-3 times.
Contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several times by 25 subjects in four different scenarios: outdoors, outdoors with scale variation, outdoors with different clothes and indoors.
The Hollywood-2 dataset contains 12 classes of human actions and 10 classes of scenes distributed over 3669 video clips and approximately 20.1 hours of video in total.
This dataset contains 5 different collective activities: crossing, walking, waiting, talking, and queueing, in 44 short video sequences, some of which were recorded by a consumer hand-held digital camera with varying viewpoint.
The Olympic Sports Dataset contains YouTube videos of athletes practicing different sports.
Surveillance-type videos
The dataset is designed to be more realistic, natural, and challenging for video surveillance domains than existing action recognition datasets, in terms of its resolution, background clutter, diversity in scenes, and human activity/event categories.
Collected from various sources, mostly from movies, and a small proportion from public databases, YouTube and Google videos. The dataset contains 6849 clips divided into 51 action categories, each containing a minimum of 101 clips.
Dataset of 9,532 images of humans performing 40 different actions, annotated with bounding-boxes.
Fully annotated dataset of RGB-D video data and data from accelerometers attached to kitchen objects capturing 25 people preparing two mixed salads each (4.5h of annotated data). Annotated activities correspond to steps in the recipe and include phase (pre-/ core-/ post) and the ingredient acted upon.

Human pose/Expression

A dynamic temporal facial expression data corpus, extracted from movies, consisting of close-to-real-world environments.

Image stitching

Images and parameters for registration

Medical

Collection of endoscopic and laparoscopic (mono/stereo) videos and images

Misc

The ZuBuD Image Database contains 1,005 images of Zurich city buildings.
The mall dataset was collected from a publicly accessible webcam for crowd counting and activity profiling research.
A busy traffic dataset for research on activity analysis and behaviour understanding.

 

Datasets from CVonline:

 

Index by Topic


Action Databases

  1.  - fully annotated 4.5 hour dataset of RGB-D video + accelerometer data, capturing 25 people preparing two mixed salads each (Dundee University, Sebastian Stein)
  2.  database (Orit Kliper-Gross)
  3.  (Ferda Ofli)
  4.  (Scott Blunsden, Bob Fisher, Aroosha Laghaee)
  5.  (Janez Pers)
  6.  - synchronised video, depth and skeleton data for 20 gaming actions captured with Microsoft Kinect (Victoria Bloom)
  7.  - 650 3D action recognition in the wild videos, 14 action classes (Simon Hadfield)
  8.  (Marcin Marszalek, Ivan Laptev, Cordelia Schmid)
  9. : Synchronized Video and Motion Capture Dataset for Evaluation of Articulated Human Motion (Brown University)
  10.  (Hansung Kim)
  11.  (Paul Hosner)
  12.  (INRIA)
  13.  - 7 types of human activity videos taken from a first-person viewpoint (Michael S. Ryoo, JPL)
  14.  (KTH CVAP lab)
  15.  - 2 cameras, annotated, depth images (Christian Wolf, et al)
  16.  - Multicamera Human Action Video Data (Hossein Ragheb)
  17.  (Oxford Visual Geometry Group)
  18.  (Ross Messing)
  19.  (Michael S. Ryoo, J. K. Aggarwal, Amit K. Roy-Chowdhury)
  20.  (Michael S. Ryoo, J. K. Aggarwal, Amit K. Roy-Chowdhury)
  21.  (Moritz Tenorth, Jan Bandouch)
  22.  (Alonso Patron-Perez)
  23.  (Univ of Central Florida)
  24.  (Univ of Central Florida)
  25.  (Kishore Reddy)
  26.  101 action classes, over 13k clips and 27 hours of video data (Univ of Central Florida)
  27.  (Univ of Central Florida)
  28.  Aerial camera, Rooftop camera and Ground camera (UCF Computer Vision Lab)
  29.  (Amit K. Roy-Chowdhury)
  30.  (Marco Cristani)
  31.  (B. Bhanu, G. Denina, C. Ding, A. Ivers, A. Kamal, C. Ravishankar, A. Roy-Chowdhury, B. Varda)
  32.  (userID: VIHASI password: virtual$virtual) (Hossein Ragheb, Kingston University)
  33.  Kinect dataset for exercise actions (Ceyhun Akgul)
  34.  - 88 open-source YouTube cooking videos with annotations (Jason Corso)
  35.  (Univ. of West Virginia)

Biological/Medical

  1.  (Lauge Sorensen)
  2.  (Eric Ehrsam)
  3.  (Allen Institute for Brain Science et al)
  4.  (Lappeenranta Univ of Technology)
  5.  (Univ of Utrecht)
  6.  (Mammographic Image Analysis Society)
  7.  (Nicholas Edelman)
  8.  (Univ of Groningen)
  9.  (Digital Imaging Group of London Ontario, Shuo Li)
  10.  (Univ of Central Florida)
  11.  - 120 3D vascular tree like structures with ground truth (Mengliu Zhao, Ghassan Hamarneh)
  12.  (Alexander Andreopoulos)

Face Databases

  1.  - 76500 frames of 17 persons using Kinect RGBD with eye positions (Sebastien Marcel)
  2.  (Mobile Biometry MOBIO )
  3.  (Univ of Surrey)
  4.  (Lijun Yin, Peter Gerhardstein and teammates)
  5.  (BioID group)
  6.  - 1000 high quality, dynamic 3D scans of faces, recorded while pronouncing a set of English sentences.
  7.  (CMU/MIT)
  8.  (CMU/MIT)
  9.  (CMU/MIT)
  10.  (Simon Baker)
  11.  (Ajmal Mian)
  12.  (FRVT - Face Recognition Vendor Test)
  13.  (Neeraj Kumar, P. N. Belhumeur, and S. K. Nayar)
  14.  (University of Massachusetts Computer Vision Laboratory)
  15.  (Face and Gesture Recognition Research Network)
  16.  (USA National Institute of Standards and Technology)
  17.  (Michael J. Lyons)
  18.  - unconstrained face recognition.  - original images, but aligned using "deep funneling" method. (University of Massachusetts, Amherst)
  19.  (Timothy Cootes)
  20.  (Ethan Meyers)
  21.  (University of North Carolina Wilmington)
  22.  (Center for Biological and Computational Learning)
  23.  (USA National Institute of Standards and Technology)
  24.  (ATT Cambridge Labs)
  25.  (Oxford Visual Geometry Group)
  26.  (Neeraj Kumar, Alexander C. Berg, Peter N. Belhumeur, and Shree K. Nayar)
  27.  - Surveillance Cameras Face Database (Mislav Grgic, Kresimir Delac, Sonja Grgic, Bozidar Klimpak)
  28.  (Igor Barros Barbosa)
  29.  - University of Buffalo kinship verification and recognition database
  30.  (Surrey University)
  31.  (A. Georghaides)
  32.  (A. Georghaides)

Fingerprints

  1.  (University of Bologna)
  2.  (University of Bologna)
  3.  - a subset of FVC (Fingerprint Verification Competition) 2002 and 2004 fingerprint image databases, manually extracted minutiae data & associated documents (Umut Uludag)
  4.  (USA National Institute of Standards and Technology)
  5. (SPD 2010 committee)

General Images

  1.  (Swiss Federal Institute of Technology)
  2.  (Nathan Jacobs)
  3.  (Ben Kimia)
  4.  (F. Yasuma, T. Mitsunaga, D. Iso, and S.K. Nayar)
  5.  (Bob Fisher et al)
  6.  (David H. Foster)
  7.  (David H. Foster)
  8.  (Li Fei-Fei, Jia Deng, Hao Su, Kai Li)
  9.  (Alex Berg, Jia Deng, Fei-Fei Li)
  10.  (Ohio State Team)
  11.  (Adriana Olmos and Fred Kingdom)
  12.  79 million 32x32 color images (Fergus, Torralba, Freeman)

Gesture Databases

  1.  (Face and Gesture Recognition Research Network)
  2.  (Euripides G.M. Petrakis)
  3.  (Sebastien Marcel)
  4.  - 2160 RGBD hand gesture sequences, 6 subjects, 10 gestures, 3 postures, 3 backgrounds, 2 illuminations (Ling Shao)

Image, Video and Shape Database Retrieval

  1.  (Ben Kimia)
  2.  (Michael Grubinger)
  3.  (Hugo Jair Escalante)
  4.  (Stefanie Nowak)
  5.  - multi-label classification challenge in Flickr photos
  6.  (Siddiqi, Zhang, Macrini, Shokoufandeh, Bouix, Dickinson)
  7.  (USA National Institute of Standards and Technology)
  8.  (USA National Institute of Standards and Technology)
  9.  (USA National Institute of Standards and Technology)
  10.  (Princeton Shape Retrieval and Analysis Group)
  11.  - millions of images and text documents for "cross-media" retrieval (Yi Yang)
  12.  (Bronstein, Bronstein, Kimmel)

Object Databases

  1.  (Ajmal Mian)
  2.  (University of Amsterdam/Intelligent Sensory Information Systems)
  3.  (Li Fei-Fei, Marco Andreeto, Marc'Aurelio Ranzato)
  4.  (Columbia University)
  5.  (Gabriele Peters, Universiteit Dortmund)
  6.  (Ruhr-Universitat Bochum)
  7.  (A. Pinz)
  8.  (Fredrik Viksten and Per-Erik Forssen)
  9.  (Antonio Criminisi, Pushmeet Kohli, Tom Minka, Carsten Rother, Toby Sharp, Jamie Shotton, John Winn)
  10.  (Liu, Sun Zheng, Tang, Shum)
  11.  (Center for Biological and Computational Learning)
  12.  (Stan Bileschi)
  13.  (Hossein Mobahi)
  14.  (NYU)
  15.  (PASCAL Consortium)
  16.  (PASCAL Consortium)
  17.  (PASCAL Consortium)
  18.  (PASCAL Consortium)
  19.  (PASCAL Consortium)
  20.  (PASCAL Consortium)
  21.  Category classification, detection, and segmentation, and still-image action classification (PASCAL Consortium)
  22.  (UIUC)
  23.  (S. Savarese and L. Fei-Fei)
  24.  (Emanuele Rodola)

People, Pedestrian, Eye/Iris, Template Detection/Tracking Databases

  1.  (L. Igual, A. Lapedriza, R. Borràs from UB, CVC and UOC, Spain)
  2.  (P. Dollar, C. Wojek, B. Schiele and P. Perona)
  3.  (Chinese Academy of Sciences)
  4.  (Chinese Academy of Sciences, T. N. Tan, Z. Sun)
  5.  (CAVIAR team/Edinburgh University - EC project IST-2001-37540)
  6.  21790 images with 56492 pedestrians plus empty scenes (M. Enzweiler, D. M. Gavrila)
  7.  (RobeSafe + Jesus Nuevo-Chiquero)
  8.  (Bob Fisher, Bashia Majecka, Gurkirt Singh, Rowland Sillito)
  9.  (Stefan Winkler)
  10.  database of 27 human attributes (Gaurav Sharma, Frederic Jurie)
  11.  (Navneet Dalal)
  12. (Sebastian Lieberknecht)
  13.  (Center for Biological and Computational Learning)
  14.  (Judd et al)
  15.  (Patrick J. Flynn)
  16.  (Reading University & James Ferryman)
  17.  (Reading University & James Ferryman)
  18.  (Reading University & James Ferryman)
  19.  (University of Beira)
  20.  (Saad Ali)
  21.  (Saad Ali)
  22.  (Neil Bruce)

Segmentation

  1.  (Sharon Alpert, Meirav Galun, Ronen Basri, Achi Brandt)
  2.  (David Martin and Charless Fowlkes)
  3.  (C. Rother, V. Kolmogorov, A. Blake, M. Brown)
  4.  (Bryan Russell, Antonio Torralba, Kevin Murphy, William Freeman)

Surveillance

  1.  (Andrea Cavallaro)
  2.  (INRIA Orion Team and others)
  3.  (Zsolt Husz)
  4.  (Queen Mary University London)
  5.  - synthetic trajectory datasets with outliers (Univ of Udine Artificial Vision and Real Time Systems Laboratory)

Textures

  1.  (textures.forrest.cz)
  2.  (Columbia & Utrecht Universities)
  3.  (Renaud Piteri, Mark Huiskes and Sandor Fazekas)
  4.  (Oulu University)
  5.  (Mikes, Haindl)
  6.  - fabrics, grains, etc.
  7.  (MIT Media Lab)

General Videos

  1.  - 156,823 videos (2,907,447 keyframes) crawled from YouTube videos (Yi Yang)

Other Collections

  1.  (Multitel)
  2.  (Carnegie Mellon Univ)
  3.  (ETH Zurich, Computer Vision Lab)
  4.  (Bastian Leibe)
  5.  (Sealeen Ren, Benjamin Yao, Michael Yang)
  6.  (Oxford Visual geometry Group)
  7.  (Pilot European Image Processing Archive)
  8.  (Univ of Bern, Computer Vision and Artificial Intelligence)
  9.  (Keith Price)
  10.  (USC Signal and Image Processing Institute)

Miscellaneous

    1.  (Guillaume Lavoue)
    2.  (Mikkel B. Stegmann)
    3.  (Ajmal Mian)
    4.  (Brostow, Shotton, Fauqueur, Cipolla)
    5.  (Yalin Bastanlar)
    6.  (Teo de Campos)
    7.  (Ullah, Pronobis, Caputo, Luo, and Jensfelt)
    8.  (M.D. Grossberg and S.K. Nayar)
    9.  (Jinwei Gu, Ravi Ramamoorthi, Peter Belhumeur, Shree Nayar)
    10.  (Christoph Strecha)
    11.  (Henrik Aanaes)
    12. : .enpeda.. Image Sequence Analysis Test Site (Auckland University Multimedia Imaging Group)
    13.  - 8240 images of 32 product logos (Stefan Romberg)
    14.  (Allan Hanbury)
    15.  (Derek Hoiem)
    16.  (Stefan Winkler)
    17.  (Krystian Mikolajczyk)
    18.  (INRIA Rhone-Alpes)
    19.  (INRIA Rhone-Alpes)
    20.  (Geiger, Lenz, Urtasun)
    21.  (Andreas Nuechter)
    22.  (Per-Erik Forssen and Erik Ringaby)
    23.  (Daniel Scharstein and Richard Szeliski)
    24.  (Michael Black)
    25.  (ESAT-PSI/VISICS,FGAN-FOM,EPFL/IC/ISIM/CVLab)
    26.  (National Cancer Institute)
    27.  - prostate images (National Cancer Institute)
    28.  (Helin Dutagaci, Afzal Godil)
    29.  (USDA Natural Resources Conservation Service)
    30.  (Andrew Stein)
    31.  (Gary Marchionini, Barbara M. Wildemuth, Gary Geisler, Yaxiao Song)
    32.  (Gamhewage Chaminda de Silva)
    33. : Artistic images of prints of well known paintings, including detail annotations. A benchmark for automatic annotation and retrieval tasks with this database was published at ECCV. (Nuno Miguel Pinho da Silva)
    34.  (Rawseeds Project)
    35.  - 3D point clouds from robotic experiments of scenes (Osnabruck and Jacobs Universities)
    36.  (Jean-Philippe Tarel, et al)
    37.  - 66 views of 45 objects
    38.  (Oisin Mac Aodha)
    39.  (Manuela Chessa)
    40.  (Francesco Vivarelli)
    41.  - a collection of ground-truth files based on the extraction of violent events in movies
    42.  (S. Narasimhan, C. Wang. S. Nayar, D. Stolyarov, K. Garg, Y. Schechner, H. Peri)

Reposted from: https://www.cnblogs.com/haoyul/p/5693066.html
