Lab Weekly Meeting Report Preview -- 20181218, 陈锦秋 and 陈雯

Title: Visual interpretability for deep learning: a survey

Speaker: 陈锦秋

Time: December 18, 2018, 2:00 PM

Venue: Room 603, Boxue Building, North Campus, Guizhou University

Abstract:

This paper reviews recent studies in understanding neural-network representations and in learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been the Achilles' heel of deep neural networks. At present, deep neural networks obtain high discrimination power at the cost of low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from a few annotations, learning via human–computer communication at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs), and revisit the visualization of CNN representations, methods of diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.
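As background for the survey's discussion of visualizing CNN representations, the sketch below shows one common technique, activation maximization: synthesizing an input by gradient ascent so that it strongly activates a chosen filter. This is a minimal illustration only, assuming PyTorch/torchvision; the layer index and filter index are arbitrary choices, not values from the talk or the paper.

```python
# Minimal activation-maximization sketch: synthesize an image that maximally
# activates one filter of an intermediate conv layer in a pre-trained CNN.
# Layer 17 and filter 10 below are arbitrary, for illustration only.
import torch
import torchvision

model = torchvision.models.vgg16(weights="IMAGENET1K_V1").eval()
target_layer = model.features[17]   # an intermediate conv layer (illustrative)
filter_idx = 10                     # an arbitrary filter to visualize

activation = {}
def hook(module, inputs, output):
    activation["feat"] = output
target_layer.register_forward_hook(hook)

# Start from random noise and run gradient ascent on the input itself.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.1)

for step in range(50):
    optimizer.zero_grad()
    model(img)
    # Maximize the mean activation of the chosen filter (minimize its negative).
    loss = -activation["feat"][0, filter_idx].mean()
    loss.backward()
    optimizer.step()

# `img` now approximates a pattern the filter responds to; rescale for display.
vis = img.detach()
vis = (vis - vis.min()) / (vis.max() - vis.min())
```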


Title: Interpreting CNNs via Decision Trees

Speaker: 陈雯

Time: December 18, 2018, 3:00 PM

Venue: Room 603, Boxue Building, North Campus, Guizhou University

Abstract:

This paper presents a method to learn a decision tree that quantitatively explains the logic behind each prediction of a pre-trained convolutional neural network (CNN). The method boosts two aspects of network interpretability. 1) In the CNN, each filter in a high conv-layer must represent a specific object part, instead of describing mixed patterns without clear meanings. 2) People can explain each specific prediction made by the CNN at the semantic level using a decision tree, i.e., which filters (or object parts) are used for the prediction and how much they contribute to it. To conduct such a quantitative explanation of a CNN, the method learns explicit representations of object parts in high conv-layers of the CNN and mines potential decision modes memorized in fully-connected layers. The decision tree organizes these potential decision modes in a coarse-to-fine manner. Experiments have demonstrated the effectiveness of the proposed method.
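To make the idea of explaining CNN predictions with a tree concrete, the sketch below fits a shallow decision tree as a surrogate that maps high conv-layer filter activations to the CNN's own predicted classes, so each prediction can be traced back to a few filters. This is a generic surrogate-tree illustration under assumed PyTorch/torchvision and scikit-learn APIs, not the paper's actual algorithm (which learns part-level filter representations and mines decision modes); the random inputs stand in for a real dataset.

```python
# Hedged illustration (not the paper's method): a surrogate decision tree over
# pooled high conv-layer activations, trained to mimic the CNN's predictions.
import torch
import torchvision
from sklearn.tree import DecisionTreeClassifier, export_text

model = torchvision.models.vgg16(weights="IMAGENET1K_V1").eval()

# Random images stand in for a real dataset; replace with actual inputs.
images = torch.randn(64, 3, 224, 224)

with torch.no_grad():
    feats = model.features(images)                 # high conv-layer feature maps
    pooled = feats.mean(dim=(2, 3)).numpy()        # one activation score per filter
    preds = model(images).argmax(dim=1).numpy()    # the CNN's own predicted classes

# Split features of the tree are filter indices, so each root-to-leaf path reads
# roughly as "these filters fired strongly -> this class", a coarse decision mode.
tree = DecisionTreeClassifier(max_depth=4).fit(pooled, preds)
print(export_text(tree, feature_names=[f"filter_{i}" for i in range(pooled.shape[1])]))
```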


