

Academic Lecture: Prof. Qi Tian, University of Texas at San Antonio

Published: 2017-01-03

Time: 10:00 a.m., January 3, 2017 (Wednesday)

Venue: Conference Room 609, North Section of Teaching Building 9

Title: Person Re-identification: Benchmarks and Our Solutions

Speaker: Prof. Qi Tian, University of Texas at San Antonio

 

【About the Speaker】Qi Tian is currently a Full Professor in the Department of Computer Science at the University of Texas at San Antonio (UTSA). He was a tenured Associate Professor from 2008 to 2012 and a tenure-track Assistant Professor from 2002 to 2008. During 2008-2009 he took a one-year faculty leave at Microsoft Research Asia (MSRA) as a Lead Researcher in the Media Computing Group. Dr. Tian received his Ph.D. in ECE from the University of Illinois at Urbana-Champaign (UIUC) in 2002 and his B.E. from Tsinghua University.

 

【Abstract】Person re-identification (re-id) is a promising way towards automatic video surveillance. As a research hotspot in recent years, it has created an urgent demand for a solid benchmarking framework, including comprehensive datasets and effective baselines. To benchmark large-scale person re-id, we propose a new high-quality frame-based dataset, “Market-1501”, which contains over 32,000 annotated bounding boxes plus a distractor set of over 500K images. Unlike traditional datasets that use hand-drawn bounding boxes, which are unavailable under realistic settings, we produce the dataset with the Deformable Part Model (DPM) as the pedestrian detector. Moreover, the dataset is collected in an open system, where each identity has multiple images under each camera. We propose an unsupervised Bag-of-Words representation and treat person re-identification as a special task of image search, which is demonstrated to be very efficient and effective.

To further push person re-identification toward practical applications, we propose a new video-based dataset, “MARS”, the largest video re-id dataset to date. Containing 1,261 identities and over 20,000 tracklets, it provides richer visual information than image-based datasets. The tracklets are generated automatically by the DPM pedestrian detector and the GMMCP tracker. Extensive evaluations of state-of-the-art methods, including space-time descriptors, are presented. We further show that a CNN in classification mode can be trained from scratch on the consecutive bounding boxes of each identity.

Finally, we present the “Person Re-identification in the Wild (PRW)” dataset for evaluating end-to-end re-id methods from raw video frames to identification results. We study the performance of various combinations of detectors and recognizers, examine how pedestrian detection can help improve overall re-identification accuracy, and assess the effectiveness of different detectors for re-identification. To aid identification, we introduce an ID-discriminative Embedding (IDE) discriminatively trained in the person subspace using convolutional neural network (CNN) features, and a Confidence Weighted Similarity (CWS) metric that incorporates detection scores into the similarity measurement.
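As a concrete illustration of the last point, the short Python/NumPy sketch below ranks automatically detected gallery boxes for one query, first by plain cosine similarity of CNN embeddings and then by a confidence-weighted similarity that multiplies the cosine score by the detector confidence. The function names, the 128-dimensional features, and the simple multiplicative weighting are assumptions made for this sketch only; they are not the exact IDE/CWS formulation presented in the talk.

import numpy as np

def l2_normalize(x, axis=1, eps=1e-12):
    # L2-normalize rows so that a dot product equals cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def rank_gallery(query_feat, gallery_feats, det_scores):
    # query_feat    : (d,)   CNN embedding of the query person (e.g., an IDE-style feature)
    # gallery_feats : (n, d) embeddings of automatically detected gallery boxes
    # det_scores    : (n,)   pedestrian-detector confidences in [0, 1]
    q = query_feat / (np.linalg.norm(query_feat) + 1e-12)
    g = l2_normalize(gallery_feats)
    cos_sim = g @ q                    # plain cosine similarity to the query
    cws = cos_sim * det_scores         # down-weight low-confidence (likely false) detections
    return np.argsort(-cos_sim), np.argsort(-cws)

# Toy usage with random data (hypothetical sizes).
rng = np.random.default_rng(0)
plain_rank, cws_rank = rank_gallery(
    rng.normal(size=128),              # query embedding
    rng.normal(size=(1000, 128)),      # 1,000 gallery detections
    rng.uniform(0.2, 1.0, size=1000),  # detection scores
)

The intent of such a weighting is that boxes produced by the detector with low confidence (often false positives on background clutter) contribute less to the final ranking than confidently detected pedestrians.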