
Kyushu University Institute of Mathematics for Industry

Explainable AI through Waveform Patterns

Akihiro YAMAGUCHI

Degree: PhD (Information Science), Nagoya University

Research interests: Machine Learning, XAI, Time-series Data Mining, Interpretability

 My main research interests are explainable machine learning and time-series data mining, with applications to the infrastructure and manufacturing sectors. In these industries, challenges beyond an AI's predictive accuracy often arise:
C1. Field experts are knowledgeable about waveforms and expect transparency in AI decision-making.
C2. Equipment usually operates normally, so anomaly data are difficult to collect.
To address these industrial challenges, we have proposed and developed machine learning methods that discover the local waveform patterns underlying AI decisions. Two approaches are introduced below:

(1) Shapelet Learning
 In data mining and machine learning, automatic classification of time-series data (e.g., normal/abnormal) has been widely studied. Shapelet learning refers to jointly learning a classifier together with local, class-discriminative waveform patterns called “shapelets.” The method is formulated as a continuous optimization problem: given the number of shapelets K and their length L in advance, the shapelet set S is a K × L matrix found by minimizing the classification error. Experts can then interpret the AI’s decision rationale by associating shapelets with mechanical phenomena, such as a loose screw causing a fault, thereby addressing C1.

Learning shapelets for time-series classification
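The joint optimization described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: published shapelet-learning methods use a differentiable soft-minimum and analytic gradients, whereas this sketch uses a hard minimum and numerical (central-difference) gradients for brevity, which is only feasible for a tiny model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: class 1 series contain a bump-shaped local pattern
N, T = 40, 60
X = rng.normal(0.0, 0.3, (N, T))
y = np.repeat([0, 1], N // 2)
bump = np.sin(np.linspace(0.0, np.pi, 10))
for i in range(N // 2, N):
    p = rng.integers(0, T - 10)
    X[i, p:p + 10] += bump

K, L = 2, 10  # number of shapelets and their length, fixed in advance

def dists(S):
    """Min sliding-window squared distance from each series to each shapelet."""
    W = np.lib.stride_tricks.sliding_window_view(X, L, axis=1)  # (N, T-L+1, L)
    return ((W[:, None, :, :] - S[None, :, None, :]) ** 2).mean(-1).min(-1)

def loss(S, w, b):
    """Logistic loss of a linear classifier on the shapelet distances."""
    z = dists(S) @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Initialize the K x L shapelet matrix from high-variance candidate windows
cand = np.lib.stride_tricks.sliding_window_view(X, L, axis=1).reshape(-1, L)
S = cand[np.argsort(-cand.var(axis=1))[:K]].copy()
w, b = np.zeros(K), 0.0

# Jointly refine shapelets S and classifier (w, b) by gradient descent
lr, h = 0.5, 1e-4
for _ in range(200):
    gS = np.zeros_like(S)
    for idx in np.ndindex(*S.shape):
        Sp, Sm = S.copy(), S.copy()
        Sp[idx] += h; Sm[idx] -= h
        gS[idx] = (loss(Sp, w, b) - loss(Sm, w, b)) / (2 * h)
    gw = np.zeros_like(w)
    for k in range(K):
        wp, wm = w.copy(), w.copy()
        wp[k] += h; wm[k] -= h
        gw[k] = (loss(S, wp, b) - loss(S, wm, b)) / (2 * h)
    gb = (loss(S, w, b + h) - loss(S, w, b - h)) / (2 * h)
    S -= lr * gS; w -= lr * gw; b -= lr * gb

acc = np.mean(((dists(S) @ w + b) > 0) == y)
print(f"training accuracy: {acc:.2f}")
```

After training, each row of S is a learned waveform pattern; a large negative classifier weight on a shapelet means "a close match to this pattern indicates class 1," which is the interpretable rationale experts can inspect.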

 However, conventional methods require anomaly data for training and therefore fail to address C2. To tackle both C1 and C2, we developed an anomaly detection method that learns shapelets from normal data only and applied it to substation equipment. Examining the segments that diverge most from the normal shapelets, we found slightly gentler slopes in the anomalous waveforms, consistent with expert knowledge about equipment slowdown. This interpretability fosters experts’ trust in AI diagnostics.

Decision rationale for shapelet learning when applied to substation equipment diagnosis
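The normal-data-only idea can be sketched in a toy form, under the assumption that a "normal shapelet" is fitted as a recurring template in normal waveforms and the anomaly score is a new series' distance to that template; the actual method applied to substation equipment is more elaborate, and all data here are synthetic (a gentler slope standing in for equipment slowdown).

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 40, 12

# Normal waveforms: a steep ramp at a random position plus noise;
# anomalous ones rise with a gentler slope (hypothetical slowdown)
def make(n, slope):
    X = rng.normal(0.0, 0.05, (n, T))
    ramp = slope * np.linspace(0.0, 1.0, L)
    for i in range(n):
        p = rng.integers(5, T - L - 5)
        X[i, p:p + L] += ramp
    return X

Xn = make(60, slope=1.0)   # training data: normal only
Xa = make(10, slope=0.4)   # unseen anomalies

def windows(X):
    return np.lib.stride_tricks.sliding_window_view(X, L, axis=1)

# Fit one normal shapelet as a recurring template: start from the
# highest-variance window, then alternate best-match alignment and averaging
W = windows(Xn)                               # (n, T-L+1, L)
flat = W.reshape(-1, L)
S = flat[flat.var(axis=1).argmax()].copy()
for _ in range(10):
    d = ((W - S) ** 2).mean(-1)               # (n, T-L+1)
    best = W[np.arange(len(W)), d.argmin(1)]  # best-matching window per series
    S = best.mean(axis=0)

def score(X):
    """Anomaly score: distance of a series' closest window to the template."""
    return ((windows(X) - S) ** 2).mean(-1).min(-1)

thr = score(Xn).mean() + 3 * score(Xn).std()
flagged = int((score(Xa) > thr).sum())
print(f"anomalies flagged: {flagged} / {len(Xa)}")
```

The decision rationale falls out of the same computation: the best-matching window of a flagged series, compared point-by-point against the normal shapelet, shows where (and how, e.g., a gentler slope) the waveform diverges from normal.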

(2) Counterfactual Waveform Generation
 Some black-box methods achieve high accuracy, and we have confirmed their accurate anomaly detection in our target industries. This led us to explore XAI that separates the decision-making model from the explanation method, so that each can be improved independently. In particular, counterfactual explanation can generate local waveform patterns that serve as rationales for the AI’s decisions (classification or anomaly detection), much like shapelet learning. Applied to substation equipment with an anomaly involving delayed shock absorption, this method generated counterfactual waveforms that matched expert assessments.

Decision rationale by counterfactual waveforms
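A minimal sketch of counterfactual waveform generation in the style of Wachter et al.: find the smallest edit to an input series that flips the model's decision, so the edited region reveals the decision rationale. A simple differentiable logistic model stands in for the black-box detector here, and the "delayed response" data are illustrative assumptions, not the actual substation signals.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 40

# Hypothetical diagnosis data: normal = fast step response,
# anomalous = delayed/slow response (cf. delayed shock absorption)
def responses(n, rate):
    t = np.arange(T)
    step = 1.0 - np.exp(-rate * np.maximum(t - 8, 0))
    return step + rng.normal(0.0, 0.05, (n, T))

X = np.vstack([responses(50, 0.8), responses(50, 0.15)])
y = np.r_[np.zeros(50), np.ones(50)]  # 1 = anomalous

# Stand-in differentiable classifier: logistic regression on raw samples
w, b = np.zeros(T), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(X)
    b -= 0.1 * g.mean()

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Counterfactual search: push an anomalous series toward the normal class
# while penalizing distance from the original (cross-entropy + L2 proximity)
x0 = X[50]                # one anomalous series
x = x0.copy()
lam = 0.1                 # proximity weight (illustrative choice)
for _ in range(300):
    p = predict(x)
    grad = p * w + 2 * lam * (x - x0)   # d/dx of -log(1-p) + lam*||x - x0||^2
    x -= 0.05 * grad

delta = x - x0
print(f"prediction: {predict(x0):.2f} -> {predict(x):.2f}")
print("largest edits at timesteps:", np.argsort(-np.abs(delta))[:5].tolist())
```

Because the search only queries the model's output and gradient, the detector and the explanation method can be improved independently, which is the separation the paragraph above describes. The timesteps where `delta` is largest are the local waveform pattern presented to experts as the rationale.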
