To all departments and units:
At the invitation of our School of Communication and Information Engineering (School of Artificial Intelligence), Associate Professor Shaohui Lin of the School of Computer Science and Technology, East China Normal University, will give an academic lecture at our university on October 22. All faculty and students are welcome to attend. The details of the talk are as follows:
Time: 16:30, Thursday, October 22, 2020
Venue: Tencent Meeting (ID: 253 922 334)
Title: Deep Neural Network Compression and Acceleration
Abstract: Deep neural networks (DNNs) have developed rapidly and achieved remarkable success in many artificial intelligence (AI) applications, such as image understanding, speech recognition, and natural language processing, making them one of the central research topics in AI. However, as the performance of DNNs has improved, the networks have become deeper and wider, which significantly increases the number of parameters and the computational complexity. How to compress and accelerate these large DNNs has therefore received ever-increasing attention from both academia and industry. Targeting the problem of parameter redundancy in DNNs, this talk presents general methods based on low-rank decomposition, parameter pruning, and knowledge distillation for DNN compression and acceleration, with a particular focus on convolutional neural networks (CNNs).
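For readers unfamiliar with the techniques named in the abstract, the following is a minimal, self-contained sketch (in Python/NumPy, not drawn from the talk itself) of one of them, low-rank decomposition: a rank-r truncated SVD of a fully connected layer's weight matrix replaces one large matrix product with two smaller ones, trading a controlled approximation error for fewer parameters and less computation. The layer size and the rank r below are arbitrary illustrative values.

```python
import numpy as np

# Hypothetical fully connected layer weight: 1024 outputs x 4096 inputs.
W = np.random.randn(1024, 4096).astype(np.float32)

# Truncated SVD: keep only the top-r singular values/vectors.
r = 64
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]   # shape (1024, r): left factors scaled by singular values
B = Vt[:r, :]          # shape (r, 4096): right factors

# Original layer:   y = W @ x        -> 1024 * 4096 parameters
# Compressed layer: y ~ A @ (B @ x)  -> (1024 + 4096) * r parameters
x = np.random.randn(4096).astype(np.float32)
y_full = W @ x
y_lowrank = A @ (B @ x)

print("parameter reduction: %.1fx" % (W.size / (A.size + B.size)))
print("relative error: %.3f" % (np.linalg.norm(y_full - y_lowrank) / np.linalg.norm(y_full)))
```

In practice (e.g. for convolutional layers, as discussed in the talk), the decomposition is applied to trained weights and the factored network is usually fine-tuned to recover accuracy; the random matrix here is only for illustration.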
About the speaker: Shaohui Lin is currently an associate researcher and a Zijiang Young Scholar in the School of Computer Science and Technology, East China Normal University (ECNU). He received his Ph.D. from Xiamen University in June 2019. Before joining ECNU, he worked as a postdoctoral researcher at the National University of Singapore. His research interests include computer vision, machine learning, and deep learning, with an emphasis on the compression and acceleration of large-capacity models. He is the first author of about 10 scientific articles published at top venues, including IEEE TPAMI, TNNLS, CVPR, IJCAI, and AAAI. He serves as a reviewer for TPAMI, IJCV, TNNLS, TMM, CVPR, NeurIPS, and other journals and conferences. He received the Outstanding Doctoral Dissertation Nomination Award from the Chinese Association for Artificial Intelligence (CAAI) in 2020.
This notice is hereby issued.
Office of Scientific Research
School of Communication and Information Engineering
October 19, 2020