The Institute of Scientific and Industrial Research, Osaka University


LAST UPDATE 2017/11/01

  • Researcher Name

    原聡 Satoshi HARA
    Assistant Professor
  • Professional Affiliation

    The Institute of Scientific and Industrial Research, Osaka University

    Department of Reasoning for Intelligence
  • Research Keywords

    Machine Learning
    Data Mining
    Artificial Intelligence
Research Subject
Improving Interpretability of Machine Learning Models

Background


Artificial Intelligence technologies such as machine learning are now ubiquitous in our daily lives. Most popular machine learning models have very complex structures, and it is therefore almost impossible for humans to interpret their internal mechanisms. Examples include the random forest, which is an ensemble of hundreds of decision trees, and deep neural networks, which consist of several network layers. These models are complete black boxes and provide no explanation of the reasons behind their decisions.

Outcome


We aim to improve the interpretability of complex machine learning models, such as random forests and deep neural networks, by making the reasons for their decisions transparent to humans. In our current research, we focus on factorizing random forests into their building blocks and then reconstructing them so that the resulting model is interpretable. Specifically, we are developing effective and computationally efficient reconstruction algorithms.
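As a rough illustration of the general idea (not the specific algorithm developed in this research), the following sketch approximates a black-box random forest with a single shallow decision tree trained to mimic the forest's predictions. The dataset, model parameters, and the surrogate-tree approach here are illustrative assumptions only.

```python
# Illustrative sketch: trading accuracy for interpretability by
# approximating a random forest with one shallow decision tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic two-class data (stand-in for real data).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Black-box model: an ensemble of 100 decision trees.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Interpretable surrogate: a single depth-3 tree fit to the
# forest's own predictions rather than the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, forest.predict(X))

# Fidelity: how often the simple tree agrees with the forest.
fidelity = accuracy_score(forest.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to forest: {fidelity:.2f}")
```

A depth-3 tree has at most eight leaves, so its decision rules can be read directly by a human; the fidelity score quantifies how much of the forest's behavior the simplified model preserves.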

Research Figure

Fig.1. Two classes of data (shown in different colors, left figures) are classified using a random forest (middle figures). The random forest splits the data into small fragments, resulting in non-interpretable decisions. With our proposed method, the model can be simplified so that it is easy for humans to interpret its decisions (right figures).

Publications

[1] Satoshi Hara and Kohei Hayashi. Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach. arXiv:1606.09066, 2016.

[2] Satoshi Hara and Kohei Hayashi. Making Tree Ensembles Interpretable. In Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning, pages 81--85, 2016.