TY - JOUR
T1 - Explaining the decisions of power quality disturbance classifiers using latent space features
AU - Machlev, Ram
AU - Perl, Michael
AU - Caciularu, Avi
AU - Belikov, Juri
AU - Levy, Kfir Yehuda
AU - Levron, Yoash
N1 - Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2023/6
Y1 - 2023/6
N2 - Deep learning techniques have recently demonstrated exceptional performance when used for Power Quality Disturbance (PQD) classification. However, a practical obstacle is that power system professionals do not fully trust the outputs of these techniques if they cannot understand the reasons for their decisions. Meanwhile, in recent years Explainable Artificial Intelligence (XAI) techniques have been used to improve the explainability of machine learning models, in order to make their outputs easier to understand. In this paper we provide a new XAI technique for explaining the decisions of PQD classifiers by projecting the input data into a space of lower dimension, known as the latent space. The method operates as follows: first, a latent space encoder–decoder is trained on the training set. Then, for each input, its features in the latent space are scored and ranked based on how their modifications affect the classifier output. Finally, the features' scoring vector is transformed back into the original feature space and is used to explain the classifier's outputs. By adopting this method, the PQD classifier results are more transparent and easier to interpret when compared to recently developed XAI techniques.
AB - Deep learning techniques have recently demonstrated exceptional performance when used for Power Quality Disturbance (PQD) classification. However, a practical obstacle is that power system professionals do not fully trust the outputs of these techniques if they cannot understand the reasons for their decisions. Meanwhile, in recent years Explainable Artificial Intelligence (XAI) techniques have been used to improve the explainability of machine learning models, in order to make their outputs easier to understand. In this paper we provide a new XAI technique for explaining the decisions of PQD classifiers by projecting the input data into a space of lower dimension, known as the latent space. The method operates as follows: first, a latent space encoder–decoder is trained on the training set. Then, for each input, its features in the latent space are scored and ranked based on how their modifications affect the classifier output. Finally, the features' scoring vector is transformed back into the original feature space and is used to explain the classifier's outputs. By adopting this method, the PQD classifier results are more transparent and easier to interpret when compared to recently developed XAI techniques.
KW - Convolutional neural networks
KW - Deep-learning
KW - Explainable artificial intelligence
KW - Latent space
KW - Power quality disturbances
KW - PQD
KW - Principal components analysis
KW - XAI
UR - http://www.scopus.com/inward/record.url?scp=85146300050&partnerID=8YFLogxK
U2 - 10.1016/j.ijepes.2023.108949
DO - 10.1016/j.ijepes.2023.108949
M3 - Article
AN - SCOPUS:85146300050
SN - 0142-0615
VL - 148
JO - International Journal of Electrical Power and Energy Systems
JF - International Journal of Electrical Power and Energy Systems
M1 - 108949
ER -
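
The abstract describes the explanation procedure only at a high level (train a latent-space encoder–decoder, score each latent feature by how modifying it changes the classifier output, then map the scores back to the input space). The following is a minimal Python sketch of that idea, not the authors' implementation: the use of scikit-learn PCA as the encoder–decoder is an assumption suggested by the record's "Principal components analysis" keyword, and the function names, the classifier_proba callable, and the perturbation size eps are all hypothetical.

```python
# Hedged sketch of latent-space feature scoring for explaining a PQD classifier.
# Assumptions: PCA as the encoder-decoder, a black-box `classifier_proba` that
# returns class probabilities for a batch of signals, and an additive
# perturbation of size `eps` per latent feature. None of this is from the paper.
import numpy as np
from sklearn.decomposition import PCA

def fit_latent_space(X_train, n_components=16):
    """Fit a linear encoder-decoder (PCA) on the training set."""
    pca = PCA(n_components=n_components)
    pca.fit(X_train)
    return pca

def score_latent_features(x, pca, classifier_proba, eps=0.5):
    """Score each latent feature of input x (1-D signal) by how much
    perturbing it changes the classifier's predicted probabilities."""
    z = pca.transform(x.reshape(1, -1))                 # encode to latent space
    base = classifier_proba(pca.inverse_transform(z))   # reference prediction
    scores = np.zeros(z.shape[1])
    for i in range(z.shape[1]):
        z_mod = z.copy()
        z_mod[0, i] += eps                               # modify one latent feature
        pred = classifier_proba(pca.inverse_transform(z_mod))
        scores[i] = np.abs(pred - base).sum()            # output sensitivity
    return scores

def explain_in_input_space(scores, pca):
    """Map the latent scoring vector back through the decoder's components
    to obtain a per-sample relevance map in the original feature space."""
    return np.abs(scores @ pca.components_)
```

In this sketch the relevance map has the same length as the input signal, so it can be plotted over the waveform to indicate which samples drive the classifier's decision, which is the kind of transparency the abstract claims for the method.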