
Academic Talks -- Prof. Trac D. Tran, Johns Hopkins University (USA), and Prof. Jie Liang, Simon Fraser University (Canada)

Published: 2016-07-13

Time: Friday, July 15, 2016, 9:00 a.m.-12:00 noon

Venue: Conference Room 609, North Block, Teaching Building 9

Title: Entropy Minimization for Sparse Recovery, Low-rank Approximation and Robust Principal Component Analysis

Speaker: Prof. Trac D. Tran, Johns Hopkins University, USA

 

【Biography of Prof. Jie Liang】Jie Liang received the B.E. and M.E. degrees from Xi'an Jiaotong University, China, the M.E. degree from the National University of Singapore (NUS), and the Ph.D. degree from Johns Hopkins University, Baltimore, Maryland, USA, in 1992, 1995, 1998, and 2003, respectively. Since May 2004, he has been with the School of Engineering Science, Simon Fraser University, Vancouver, Canada, where he is currently a Professor and the Associate Director.
 
Jie Liang's research interests include image and video coding, multimedia communications, sparse signal processing, computer vision, and machine learning. He has served as an Associate Editor for the IEEE Transactions on Image Processing (TIP), the IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), and the IEEE Signal Processing Letters (SPL). He is a member of the IEEE Multimedia Systems and Applications (MSA) Technical Committee and the Multimedia Signal Processing (MMSP) Technical Committee. He received the 2014 IEEE TCSVT Best Associate Editor Award, the 2014 SFU Dean of Graduate Studies Award for Excellence in Leadership, and the 2015 Canada NSERC Discovery Accelerator Supplements (DAS) Award.

 

【Abstract】Deep neural networks typically contain layers with millions of parameters, making them difficult to deploy and update on resource-limited devices such as mobile phones and other smart embedded systems. In this work, we propose a scalable representation of the network parameters, so that different applications can select the bit rate of the network that best fits their own storage constraints. Moreover, when a device needs to upgrade to a higher-rate network, the existing low-rate network can be reused, and only some incremental data need to be downloaded. We first hierarchically quantize the weights of a pre-trained deep neural network to enforce weight sharing. Next, we adaptively select the bits assigned to each layer given the total bit budget. Finally, we retrain the network to fine-tune the quantized centroids. Experimental results show that our method achieves scalable compression with graceful degradation in performance.
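To make the weight-sharing step concrete, below is a minimal NumPy sketch (an illustrative assumption, not the speaker's implementation): it clusters one layer's weights into 2^bits shared centroids with a plain Lloyd-style k-means, so the layer can be stored as centroid indices plus a small codebook, and it shows how a larger bit budget lowers the reconstruction error. The function names quantize_layer and dequantize_layer are hypothetical.

# Illustrative sketch of weight-sharing quantization for one layer.
# Not the authors' code; uses only NumPy and a simple Lloyd iteration.
import numpy as np

def quantize_layer(weights, bits, n_iter=20):
    """Cluster the flattened weights into 2**bits shared values (centroids)."""
    flat = weights.ravel()
    k = 2 ** bits
    # Initialize centroids evenly over the weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(n_iter):
        # Assign each weight to its nearest centroid.
        idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Move each centroid to the mean of the weights assigned to it.
        for c in range(k):
            members = flat[idx == c]
            if members.size:
                centroids[c] = members.mean()
    return idx.reshape(weights.shape), centroids

def dequantize_layer(indices, centroids):
    """Rebuild the approximate weight tensor from indices plus codebook."""
    return centroids[indices]

# Toy usage: quantize one "layer" at several rates and compare the error,
# mimicking the idea that a larger bit budget yields a more faithful network.
rng = np.random.default_rng(0)
layer = rng.normal(size=(64, 64))
for bits in (2, 4, 6):
    idx, codebook = quantize_layer(layer, bits)
    err = np.mean((dequantize_layer(idx, codebook) - layer) ** 2)
    print(f"{bits} bits/weight -> MSE {err:.5f}")

In the scalable scheme described in the abstract, the per-layer bit widths would additionally be chosen under a total bit budget and the centroids fine-tuned by retraining; this sketch only illustrates the basic weight-sharing quantization of a single layer.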