Extracting and Harnessing Interpretation in Data Mining
Abstract

Machine learning models have been widely applied in data mining due to their unprecedented prediction capabilities. However, machine learning is often criticized as a “black box” because of its opacity. To tackle this issue, interpretation techniques are needed to understand the working mechanisms of models. Interpreting machine learning models, especially deep models, is a challenging problem in data mining because: (1) the definition of interpretation itself is vague, and (2) complex models have convoluted structures and information-processing paths. I propose to tackle the problem from three aspects. First, given a black-box model, a fundamental requirement of interpretation is to attribute its predictions to the important input features; the obtained interpretation can then be used to improve model robustness. Second, I propose to understand the global latent representations learned by the model in order to extract structural knowledge. Third, I develop interpretable network embedding models by disentangling latent representations.
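As a point of reference for the first aspect, feature attribution is commonly illustrated with a gradient-based saliency computation. The sketch below is a minimal, hypothetical example (the model and data are placeholders, not the speaker's method): it measures how sensitive the predicted class score is to each input feature.

```python
# Minimal sketch of gradient-based feature attribution (saliency).
# The model and input here are hypothetical placeholders for illustration only.
import torch
import torch.nn as nn

# Hypothetical black-box classifier over 10 input features.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 10, requires_grad=True)  # one input example
logits = model(x)
target = logits.argmax(dim=1).item()        # predicted class

# Gradient of the predicted class score with respect to the input features;
# larger magnitude suggests a more important feature for this prediction.
logits[0, target].backward()
attribution = x.grad.abs().squeeze()

print(attribution)
```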

Speaker: Mr Ninghao LIU 
Date: 20 January 2021 (Wed)
Time: 10:00am – 11:00am

Biography

Mr Ninghao Liu is a Ph.D. student in the Department of Computer Science and Engineering at Texas A&M University. He received his M.Sc. degree from the School of Electrical and Computer Engineering at Georgia Institute of Technology in 2015, and his B.Eng. degree from the School of Electronic and Information Engineering at South China University of Technology in 2014. His research interests include explainable artificial intelligence, network analysis, anomaly detection, and recommender systems.