13.6.3.2 Explainable Artificial Intelligence

Explainable. Knowledge. Applied to CNNs especially:
See also Interpretation, Explanation, Understanding of Convolutional Neural Networks.

Wellman, M.P., Henrion, M.,
Explaining 'explaining away',
PAMI(15), No. 3, March 1993, pp. 287-292.
IEEE DOI 0401
BibRef

Montavon, G.[Grégoire], Lapuschkin, S.[Sebastian], Binder, A.[Alexander], Samek, W.[Wojciech], Müller, K.R.[Klaus-Robert],
Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition,
PR(65), No. 1, 2017, pp. 211-222.
Elsevier DOI 1702
Award, Pattern Recognition. Deep neural networks BibRef

Lapuschkin, S., Binder, A., Montavon, G.[Grégoire], Müller, K.R.[Klaus-Robert], Samek, W.[Wojciech],
Analyzing Classifiers: Fisher Vectors and Deep Neural Networks,
CVPR16(2912-2920)
IEEE DOI 1612
BibRef

Jung, A., Nardelli, P.H.J.,
An Information-Theoretic Approach to Personalized Explainable Machine Learning,
SPLetters(27), 2020, pp. 825-829.
IEEE DOI 2006
Predictive models, Data models, Probabilistic logic, Machine learning, Decision making, Linear regression, decision support systems BibRef

Muñoz-Romero, S.[Sergio], Gorostiaga, A.[Arantza], Soguero-Ruiz, C.[Cristina], Mora-Jiménez, I.[Inmaculada], Rojo-Álvarez, J.L.[José Luis],
Informative variable identifier: Expanding interpretability in feature selection,
PR(98), 2020, pp. 107077.
Elsevier DOI 1911
Feature selection, Interpretability, Explainable machine learning, Resampling, Classification BibRef

Kauffmann, J.[Jacob], Müller, K.R.[Klaus-Robert], Montavon, G.[Grégoire],
Towards explaining anomalies: A deep Taylor decomposition of one-class models,
PR(101), 2020, pp. 107198.
Elsevier DOI 2003
Outlier detection, Explainable machine learning, Deep Taylor decomposition, Kernel machines, Unsupervised learning BibRef

Yeom, S.K.[Seul-Ki], Seegerer, P.[Philipp], Lapuschkin, S.[Sebastian], Binder, A.[Alexander], Wiedemann, S.[Simon], Müller, K.R.[Klaus-Robert], Samek, W.[Wojciech],
Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning,
PR(115), 2021, pp. 107899.
Elsevier DOI 2104
Pruning, Layer-wise relevance propagation (LRP), Convolutional neural network (CNN), Interpretation of models, Explainable AI (XAI) BibRef

Pierrard, R.[Régis], Poli, J.P.[Jean-Philippe], Hudelot, C.[Céline],
Spatial relation learning for explainable image classification and annotation in critical applications,
AI(292), 2021, pp. 103434.
Elsevier DOI 2102
Explainable artificial intelligence, Relation learning, Fuzzy logic BibRef

Langer, M.[Markus], Oster, D.[Daniel], Speith, T.[Timo], Hermanns, H.[Holger], Kästner, L.[Lena], Schmidt, E.[Eva], Sesing, A.[Andreas], Baum, K.[Kevin],
What do we want from Explainable Artificial Intelligence (XAI)?: A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research,
AI(296), 2021, pp. 103473.
Elsevier DOI 2106
Explainable Artificial Intelligence, Explainability, Interpretability, Explanations, Understanding, Human-Computer Interaction BibRef

Rio-Torto, I.[Isabel], Fernandes, K.[Kelwin], Teixeira, L.F.[Luís F.],
Understanding the decisions of CNNs: An in-model approach,
PRL(133), 2020, pp. 373-380.
Elsevier DOI 2005
Explainable AI, Explainability, Interpretability, Deep Learning, Convolutional Neural Networks BibRef

Mokoena, T.[Tshepiso], Celik, T.[Turgay], Marivate, V.[Vukosi],
Why is this an anomaly? Explaining anomalies using sequential explanations,
PR(121), 2022, pp. 108227.
Elsevier DOI 2109
Outlier explanation, Sequential feature explanation, Sequential explanation, Anomaly validation, Explainable AI BibRef

Anjomshoae, S.[Sule], Omeiza, D.[Daniel], Jiang, L.[Lili],
Context-based image explanations for deep neural networks,
IVC(116), 2021, pp. 104310.
Elsevier DOI 2112
DNNs, Explainable AI, Contextual importance, Visual explanations BibRef

Sattarzadeh, S.[Sam], Sudhakar, M.[Mahesh], Plataniotis, K.N.[Konstantinos N.],
SVEA: A Small-scale Benchmark for Validating the Usability of Post-hoc Explainable AI Solutions in Image and Signal Recognition,
HTCV21(4141-4150)
IEEE DOI 2112
Performance evaluation, Visualization, Image recognition, Correlation, Machine learning, Benchmark testing BibRef

Teneggi, J.[Jacopo], Luster, A.[Alexandre], Sulam, J.[Jeremias],
Fast Hierarchical Games for Image Explanations,
PAMI(45), No. 4, April 2023, pp. 4494-4503.
IEEE DOI 2303
Games, Computational modeling, Neural networks, Tumors, Task analysis, Supervised learning, Standards, image explanations BibRef

Chattopadhyay, A.[Aditya], Slocum, S.[Stewart], Haeffele, B.D.[Benjamin D.], Vidal, R.[René], Geman, D.[Donald],
Interpretable by Design: Learning Predictors by Composing Interpretable Queries,
PAMI(45), No. 6, June 2023, pp. 7430-7443.
IEEE DOI 2305
Birds, Task analysis, Predictive models, Image color analysis, Computational modeling, Vegetation, Shape, Explainable AI, information theory BibRef

Bandyapadhyay, S.[Sayan], Fomin, F.V.[Fedor V.], Golovach, P.A.[Petr A.], Lochet, W.[William], Purohit, N.[Nidhi], Simonov, K.[Kirill],
How to find a good explanation for clustering?,
AI(322), 2023, pp. 103948.
Elsevier DOI 2308
Explainable clustering, Clustering with outliers, Multivariate analysis BibRef

Chen, H.F.[Hai-Fei], Yang, L.P.[Li-Ping], Wu, Q.S.[Qiu-Sheng],
Enhancing Land Cover Mapping and Monitoring: An Interactive and Explainable Machine Learning Approach Using Google Earth Engine,
RS(15), No. 18, 2023, pp. 4585.
DOI Link 2310
BibRef

Jiao, L.M.[Lian-Meng], Yang, H.Y.[Hao-Yu], Wang, F.[Feng], Liu, Z.G.[Zhun-Ga], Pan, Q.[Quan],
DTEC: Decision tree-based evidential clustering for interpretable partition of uncertain data,
PR(144), 2023, pp. 109846.
Elsevier DOI 2310
Evidential clustering, Interpretable clustering, Unsupervised decision tree, Belief function theory BibRef

Roussel, C.[Cédric], Böhm, K.[Klaus],
Geospatial XAI: A Review,
IJGI(12), No. 9, 2023, pp. 355.
DOI Link 2310
BibRef

Patricio, C.[Cristiano], Neves, J.C.[Joao C.], Teixeira, L.F.[Luis F.],
Explainable Deep Learning Methods in Medical Image Classification: A Survey,
Surveys(56), No. 4, October 2023, pp. xx-yy.
DOI Link 2312
Survey, Explainable AI. Explainable AI, interpretability, explainability, deep learning, medical image analysis BibRef

Delaney, E.[Eoin], Pakrashi, A.[Arjun], Greene, D.[Derek], Keane, M.T.[Mark T.],
Counterfactual explanations for misclassified images: How human and machine explanations differ,
AI(324), 2023, pp. 103995.
Elsevier DOI 2312
XAI, Counterfactual explanation, User testing BibRef

Posada-Moreno, A.F.[Andrés Felipe], Surya, N.[Nikita], Trimpe, S.[Sebastian],
ECLAD: Extracting Concepts with Local Aggregated Descriptors,
PR(147), 2024, pp. 110146.
Elsevier DOI 2312
Concept extraction, Explainable artificial intelligence, Convolutional neural networks BibRef

Liu, P.[Peng], Wang, L.[Lizhe], Li, J.[Jun],
Unlocking the Potential of Explainable Artificial Intelligence in Remote Sensing Big Data,
RS(15), No. 23, 2023, pp. 5448.
DOI Link 2312
BibRef

Yang, Y.Q.[Ya-Qi], Zhao, Y.[Yang], Cheng, Y.[Yuan],
PRIME: Posterior Reconstruction of the Input for Model Explanations,
PRL(176), 2023, pp. 202-208.
Elsevier DOI 2312
Machine learning, Statistical inference, Classification, Model explainability BibRef

Wang, Z.[Zhuo], Zhang, W.[Wei], Liu, N.[Ning], Wang, J.Y.[Jian-Yong],
Learning Interpretable Rules for Scalable Data Representation and Classification,
PAMI(46), No. 2, February 2024, pp. 1121-1133.
IEEE DOI 2401
Interpretable classification, representation learning, rule-based model, scalability BibRef

Rong, Y.[Yao], Leemann, T.[Tobias], Nguyen, T.T.[Thai-Trang], Fiedler, L.[Lisa], Qian, P.Z.[Pei-Zhu], Unhelkar, V.[Vaibhav], Seidel, T.[Tina], Kasneci, G.[Gjergji], Kasneci, E.[Enkelejda],
Towards Human-Centered Explainable AI: A Survey of User Studies for Model Explanations,
PAMI(46), No. 4, April 2024, pp. 2104-2122.
IEEE DOI 2403
Artificial intelligence, Task analysis, Human computer interaction, Surveys, Bibliographies, Usability, human-AI interaction BibRef


Wang, C.[Chong], Liu, Y.[Yuyuan], Chen, Y.H.[Yuan-Hong], Liu, F.B.[Feng-Bei], Tian, Y.[Yu], McCarthy, D.[Davis], Frazer, H.[Helen], Carneiro, G.[Gustavo],
Learning Support and Trivial Prototypes for Interpretable Image Classification,
ICCV23(2062-2072)
IEEE DOI 2401
BibRef

Santos, F.A.O.[Flávio Arthur Oliveira], Zanchettin, C.[Cleber],
Exploring Image Classification Robustness and Interpretability with Right for the Right Reasons Data Augmentation,
LXCV-ICCV23(4149-4158)
IEEE DOI 2401
BibRef

Zhang, Y.F.[Yi-Fei], Gu, S.[Siyi], Gao, Y.Y.[Yu-Yang], Pan, B.[Bo], Yang, X.F.[Xiao-Feng], Zhao, L.[Liang],
MAGI: Multi-Annotated Explanation-Guided Learning,
ICCV23(1977-1987)
IEEE DOI 2401
BibRef

Fan, L.[Lei], Liu, B.[Bo], Li, H.X.[Hao-Xiang], Wu, Y.[Ying], Hua, G.[Gang],
Flexible Visual Recognition by Evidential Modeling of Confusion and Ignorance,
ICCV23(1338-1347)
IEEE DOI 2401
Addresses confidence of results and multiple options. BibRef

Englebert, A.[Alexandre], Stassin, S.[Sédrick], Nanfack, G.[Géraldin], Mahmoudi, S.A.[Sidi Ahmed], Siebert, X.[Xavier], Cornu, O.[Olivier], de Vleeschouwer, C.[Christophe],
Explaining through Transformer Input Sampling,
NIVT23(806-815)
IEEE DOI Code:
WWW Link. 2401
BibRef

Masala, M.[Mihai], Cudlenco, N.[Nicolae], Rebedea, T.[Traian], Leordeanu, M.[Marius],
Explaining Vision and Language through Graphs of Events in Space and Time,
CLVL23(2818-2823)
IEEE DOI 2401
BibRef

Zelenka, C.[Claudius], Göhring, A.[Andrea], Kazempour, D.[Daniyal], Hünemörder, M.[Maximilian], Schmarje, L.[Lars], Kröger, P.[Peer],
A Simple and Explainable Method for Uncertainty Estimation using Attribute Prototype Networks,
Uncertainty23(4572-4581)
IEEE DOI 2401
BibRef

Hesse, R.[Robin], Schaub-Meyer, S.[Simone], Roth, S.[Stefan],
FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods,
ICCV23(3958-3968)
IEEE DOI 2401
BibRef

Dai, B.[Bo], Wang, L.[Linge], Jia, B.X.[Bao-Xiong], Zhang, Z.[Zeyu], Zhu, S.C.[Song-Chun], Zhang, C.[Chi], Zhu, Y.X.[Yi-Xin],
X-VoE: Measuring eXplanatory Violation of Expectation in Physical Events,
ICCV23(3969-3979)
IEEE DOI 2401
BibRef

Roth, K.[Karsten], Kim, J.M.[Jae Myung], Koepke, A.S.[A. Sophia], Vinyals, O.[Oriol], Schmid, C.[Cordelia], Akata, Z.[Zeynep],
Waffling around for Performance: Visual Classification with Random Words and Broad Concepts,
ICCV23(15700-15711)
IEEE DOI Code:
WWW Link. 2401
BibRef

Gerstenberger, M.[Michael], Wiegand, T.[Thomas], Eisert, P.[Peter], Bosse, S.[Sebastian],
But That's Not Why: Inference Adjustment by Interactive Prototype Revision,
CIARP23(I:123-132).
Springer DOI 2312
BibRef

Poppi, S.[Samuele], Bigazzi, R.[Roberto], Rawal, N.[Niyati], Cornia, M.[Marcella], Cascianelli, S.[Silvia], Baraldi, L.[Lorenzo], Cucchiara, R.[Rita],
Towards Explainable Navigation and Recounting,
CIAP23(I:171-183).
Springer DOI 2312
BibRef

Nicolaou, A.[Andria], Prentzas, N.[Nicoletta], Loizou, C.P.[Christos P.], Pantzaris, M.[Marios], Kakas, A.[Antonis], Pattichis, C.S.[Constantinos S.],
A Comparative Study of Explainable AI Models in the Assessment of Multiple Sclerosis,
CAIP23(II:140-148).
Springer DOI 2312
BibRef

Wang, Y.Y.[Yang Yang], Bunyak, F.[Filiz],
DFT-CAM: Discrete Fourier Transform Driven Class Activation Map,
ICIP23(500-504)
IEEE DOI 2312
BibRef

Joukovsky, B.[Boris], Sammani, F.[Fawaz], Deligiannis, N.[Nikos],
Model-Agnostic Visual Explanations via Approximate Bilinear Models,
ICIP23(1770-1774)
IEEE DOI 2312
BibRef

Wang, Y.F.[Yi-Fan], Deng, S.Y.[Si-Yuan], Yuan, K.H.[Kun-Hao], Schaefer, G.[Gerald], Liu, X.[Xiyao], Fang, H.[Hui],
A Novel Class Activation Map for Visual Explanations in Multi-Object Scenes,
ICIP23(2615-2619)
IEEE DOI 2312
BibRef

Moayeri, M.[Mazda], Rezaei, K.[Keivan], Sanjabi, M.[Maziar], Feizi, S.[Soheil],
Text2Concept: Concept Activation Vectors Directly from Text,
XAI4CV23(3744-3749)
IEEE DOI 2309
BibRef

Ramaswamy, V.V.[Vikram V.], Kim, S.S.Y.[Sunnie S. Y.], Fong, R.[Ruth], Russakovsky, O.[Olga],
Overlooked Factors in Concept-Based Explanations: Dataset Choice, Concept Learnability, and Human Capability,
CVPR23(10932-10941)
IEEE DOI 2309
BibRef

Wang, H.[Hanjing], Joshi, D.[Dhiraj], Wang, S.Q.[Shi-Qiang], Ji, Q.[Qiang],
Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning,
CVPR23(12044-12053)
IEEE DOI 2309
BibRef

Zemni, M.[Mehdi], Chen, M.[Mickaël], Zablocki, É.[Éloi], Ben-Younes, H.[Hédi], Pérez, P.[Patrick], Cord, M.[Matthieu],
OCTET: Object-aware Counterfactual Explanations,
CVPR23(15062-15071)
IEEE DOI 2309
BibRef

Jeanneret, G.[Guillaume], Simon, L.[Loïc], Jurie, F.[Frédéric],
Adversarial Counterfactual Visual Explanations,
CVPR23(16425-16435)
IEEE DOI 2309
BibRef

Yang, R.[Ruo], Wang, B.H.[Bing-Hui], Bilgic, M.[Mustafa],
IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients,
CVPR23(23725-23734)
IEEE DOI 2309
BibRef

Fel, T.[Thomas], Ducoffe, M.[Melanie], Vigouroux, D.[David], Cadène, R.[Rémi], Capelle, M.[Mikael], Nicodème, C.[Claire], Serre, T.[Thomas],
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis,
CVPR23(16153-16163)
IEEE DOI 2309
BibRef

Arias-Duart, A.[Anna], Mariotti, E.[Ettore], Garcia-Gasulla, D.[Dario], Alonso-Moral, J.M.[Jose Maria],
A Confusion Matrix for Evaluating Feature Attribution Methods,
XAI4CV23(3709-3714)
IEEE DOI 2309
BibRef

Bordt, S.[Sebastian], Upadhyay, U.[Uddeshya], Akata, Z.[Zeynep], von Luxburg, U.[Ulrike],
The Manifold Hypothesis for Gradient-Based Explanations,
XAI4CV23(3697-3702)
IEEE DOI 2309
BibRef

Yang, Y.[Yue], Panagopoulou, A.[Artemis], Zhou, S.[Shenghao], Jin, D.[Daniel], Callison-Burch, C.[Chris], Yatskar, M.[Mark],
Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification,
CVPR23(19187-19197)
IEEE DOI 2309
BibRef

Nauta, M.[Meike], Schlötterer, J.[Jörg], van Keulen, M.[Maurice], Seifert, C.[Christin],
PIP-Net: Patch-Based Intuitive Prototypes for Interpretable Image Classification,
CVPR23(2744-2753)
IEEE DOI 2309
BibRef

Fel, T.[Thomas], Picard, A.[Agustin], Bethune, L.[Louis], Boissin, T.[Thibaut], Vigouroux, D.[David], Colin, J.[Julien], Cadène, R.[Rémi], Serre, T.[Thomas],
CRAFT: Concept Recursive Activation FacTorization for Explainability,
CVPR23(2711-2721)
IEEE DOI 2309
BibRef

Santhirasekaram, A.[Ainkaran], Kori, A.[Avinash], Winkler, M.[Mathias], Rockall, A.[Andrea], Toni, F.[Francesca], Glocker, B.[Ben],
Robust Hierarchical Symbolic Explanations in Hyperbolic Space for Image Classification,
TAG-PRA23(561-570)
IEEE DOI 2309
BibRef

Zhao, Z.X.[Zi-Xiang], Zhang, J.S.[Jiang-She], Bai, H.[Haowen], Wang, Y.C.[Yi-Cheng], Cui, Y.K.[Yu-Kun], Deng, L.[Lilun], Sun, K.[Kai], Zhang, C.X.[Chun-Xia], Liu, J.[Junmin], Xu, S.[Shuang],
Deep Convolutional Sparse Coding Networks for Interpretable Image Fusion,
AML23(2369-2377)
IEEE DOI 2309
BibRef

Akash Guna, R.T., Benitez, R.[Raul], Sikha, O.K.,
Ante-Hoc Generation of Task-Agnostic Interpretation Maps,
XAI4CV23(3764-3769)
IEEE DOI 2309
BibRef

Shukla, P.[Pushkar], Bharati, S.[Sushil], Turk, M.[Matthew],
CAVLI - Using image associations to produce local concept-based explanations,
XAI4CV23(3750-3755)
IEEE DOI 2309
BibRef

Hasany, S.N.[Syed Nouman], Petitjean, C.[Caroline], Mériaudeau, F.[Fabrice],
Seg-XRes-CAM: Explaining Spatially Local Regions in Image Segmentation,
XAI4CV23(3733-3738)
IEEE DOI 2309
BibRef

Riva, M.[Mateus], Gori, P.[Pietro], Yger, F.[Florian], Bloch, I.[Isabelle],
Is the U-NET Directional-Relationship Aware?,
ICIP22(3391-3395)
IEEE DOI 2211
Training data, Cognition, Task analysis, Standards, XAI, structural information, directional relationships, U-Net BibRef

Teney, D.[Damien], Peyrard, M.[Maxime], Abbasnejad, E.[Ehsan],
Predicting Is Not Understanding: Recognizing and Addressing Underspecification in Machine Learning,
ECCV22(XXIII:458-476).
Springer DOI 2211
BibRef

Sovatzidi, G.[Georgia], Vasilakakis, M.D.[Michael D.], Iakovidis, D.K.[Dimitris K.],
Automatic Fuzzy Graph Construction For Interpretable Image Classification,
ICIP22(3743-3747)
IEEE DOI 2211
Image edge detection, Semantics, Machine learning, Predictive models, Feature extraction, Interpretability BibRef

Chari, P.[Pradyumna], Ba, Y.H.[Yun-Hao], Athreya, S.[Shreeram], Kadambi, A.[Achuta],
MIME: Minority Inclusion for Majority Group Enhancement of AI Performance,
ECCV22(XIII:326-343).
Springer DOI 2211

WWW Link. BibRef

Deng, A.[Ailin], Li, S.[Shen], Xiong, M.[Miao], Chen, Z.[Zhirui], Hooi, B.[Bryan],
Trust, but Verify: Using Self-supervised Probing to Improve Trustworthiness,
ECCV22(XIII:361-377).
Springer DOI 2211
BibRef

Rymarczyk, D.[Dawid], Struski, L.[Lukasz], Górszczak, M.[Michal], Lewandowska, K.[Koryna], Tabor, J.[Jacek], Zielinski, B.[Bartosz],
Interpretable Image Classification with Differentiable Prototypes Assignment,
ECCV22(XII:351-368).
Springer DOI 2211
BibRef

Vandenhende, S.[Simon], Mahajan, D.[Dhruv], Radenovic, F.[Filip], Ghadiyaram, D.[Deepti],
Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals,
ECCV22(XII:261-279).
Springer DOI 2211

WWW Link. BibRef

Kim, S.S.Y.[Sunnie S. Y.], Meister, N.[Nicole], Ramaswamy, V.V.[Vikram V.], Fong, R.[Ruth], Russakovsky, O.[Olga],
HIVE: Evaluating the Human Interpretability of Visual Explanations,
ECCV22(XII:280-298).
Springer DOI 2211
BibRef

Jacob, P.[Paul], Zablocki, É.[Éloi], Ben-Younes, H.[Hédi], Chen, M.[Mickaël], Pérez, P.[Patrick], Cord, M.[Matthieu],
STEEX: Steering Counterfactual Explanations with Semantics,
ECCV22(XII:387-403).
Springer DOI 2211
BibRef

Machiraju, G.[Gautam], Plevritis, S.[Sylvia], Mallick, P.[Parag],
A Dataset Generation Framework for Evaluating Megapixel Image Classifiers and Their Explanations,
ECCV22(XII:422-442).
Springer DOI 2211
BibRef

Kolek, S.[Stefan], Nguyen, D.A.[Duc Anh], Levie, R.[Ron], Bruna, J.[Joan], Kutyniok, G.[Gitta],
Cartoon Explanations of Image Classifiers,
ECCV22(XII:443-458).
Springer DOI 2211
BibRef

Motzkus, F.[Franz], Weber, L.[Leander], Lapuschkin, S.[Sebastian],
Measurably Stronger Explanation Reliability Via Model Canonization,
ICIP22(516-520)
IEEE DOI 2211
Location awareness, Deep learning, Visualization, Current measurement, Neural networks, Network architecture BibRef

Yang, G.[Guang], Rao, A.[Arvind], Fernandez-Maloigne, C.[Christine], Calhoun, V.[Vince], Menegaz, G.[Gloria],
Explainable AI (XAI) In Biomedical Signal and Image Processing: Promises and Challenges,
ICIP22(1531-1535)
IEEE DOI 2211
Deep learning, Image segmentation, Special issues and sections, Biological system modeling, Signal processing, Data models, Biomedical Data BibRef

Paiss, R.[Roni], Chefer, H.[Hila], Wolf, L.B.[Lior B.],
No Token Left Behind: Explainability-Aided Image Classification and Generation,
ECCV22(XII:334-350).
Springer DOI 2211
BibRef

Khorram, S.[Saeed], Li, F.X.[Fu-Xin],
Cycle-Consistent Counterfactuals by Latent Transformations,
CVPR22(10193-10202)
IEEE DOI 2210
Try to find images similar to the query image that change the decision. Training, Measurement, Visualization, Image resolution, Machine vision, Computational modeling, Explainable computer vision BibRef

Hepburn, A.[Alexander], Santos-Rodriguez, R.[Raul],
Explainers in the Wild: Making Surrogate Explainers Robust to Distortions Through Perception,
ICIP21(3717-3721)
IEEE DOI 2201
Training, Measurement, Image processing, Predictive models, Distortion, Robustness, Explainability, surrogates, perception BibRef

Palacio, S.[Sebastian], Lucieri, A.[Adriano], Munir, M.[Mohsin], Ahmed, S.[Sheraz], Hees, J.[Jörn], Dengel, A.[Andreas],
XAI Handbook: Towards a Unified Framework for Explainable AI,
RPRMI21(3759-3768)
IEEE DOI 2112
Measurement, Terminology, Pipelines, Market research, Concrete BibRef

Vierling, A.[Axel], James, C.[Charu], Berns, K.[Karsten], Katsaouni, N.[Nikoletta],
Provable Translational Robustness for Object Detection With Convolutional Neural Networks,
ICIP21(694-698)
IEEE DOI 2201
Training, Support vector machines, Analytical models, Scattering, Object detection, Detectors, Feature extraction, Explainable AI BibRef

Ortega, A.[Alfonso], Fierrez, J.[Julian], Morales, A.[Aythami], Wang, Z.L.[Zi-Long], Ribeiro, T.[Tony],
Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and Explainable Automatic Recruitment,
WACVW21(78-87) Explainable and Interpretable AI
IEEE DOI 2105
Training, Machine learning algorithms, Biometrics (access control), Resumes, Neural networks, Tools BibRef

Kwon, H.J.[Hyuk Jin], Koo, H.I.[Hyung Il], Cho, N.I.[Nam Ik],
Improving Explainability of Integrated Gradients with Guided Non-Linearity,
ICPR21(385-391)
IEEE DOI 2105
Measurement, Heating systems, Visualization, Gradient methods, Action potentials, Perturbation methods, Neurons BibRef

Fuhl, W.[Wolfgang], Rong, Y.[Yao], Motz, T.[Thomas], Scheidt, M.[Michael], Hartel, A.[Andreas], Koch, A.[Andreas], Kasneci, E.[Enkelejda],
Explainable Online Validation of Machine Learning Models for Practical Applications,
ICPR21(3304-3311)
IEEE DOI 2105
Machine learning algorithms, Microcontrollers, Memory management, Data acquisition, Training data, Transforms, Machine learning BibRef

Mänttäri, J.[Joonatan], Broomé, S.[Sofia], Folkesson, J.[John], Kjellström, H.[Hedvig],
Interpreting Video Features: A Comparison of 3d Convolutional Networks and Convolutional LSTM Networks,
ACCV20(V:411-426).
Springer DOI 2103

See also Interpretable Explanations of Black Boxes by Meaningful Perturbation. BibRef

Oussalah, M.[Mourad],
AI Explainability. A Bridge Between Machine Vision and Natural Language Processing,
EDL-AI20(257-273).
Springer DOI 2103
BibRef

Petkovic, D., Alavi, A., Cai, D., Wong, M.,
Random Forest Model and Sample Explainer for Non-experts in Machine Learning: Two Case Studies,
EDL-AI20(62-75).
Springer DOI 2103
BibRef

Muddamsetty, S.M.[Satya M.], Jahromi, M.N.S.[Mohammad N. S.], Moeslund, T.B.[Thomas B.],
Expert Level Evaluations for Explainable AI (XAI) Methods in the Medical Domain,
EDL-AI20(35-46).
Springer DOI 2103
BibRef

Muddamsetty, S.M., Mohammad, N.S.J., Moeslund, T.B.,
SIDU: Similarity Difference And Uniqueness Method for Explainable AI,
ICIP20(3269-3273)
IEEE DOI 2011
Visualization, Predictive models, Machine learning, Computational modeling, Measurement, Task analysis, Explainable AI, CNN BibRef

Sun, Y.C.[You-Cheng], Chockler, H.[Hana], Huang, X.W.[Xiao-Wei], Kroening, D.[Daniel],
Explaining Image Classifiers Using Statistical Fault Localization,
ECCV20(XXVIII:391-406).
Springer DOI 2011
BibRef

Choi, H., Som, A., Turaga, P.,
AMC-Loss: Angular Margin Contrastive Loss for Improved Explainability in Image Classification,
Diff-CVML20(3659-3666)
IEEE DOI 2008
Training, Task analysis, Feature extraction, Euclidean distance, Airplanes, Media BibRef

Parafita, Á., Vitrià, J.,
Explaining Visual Models by Causal Attribution,
VXAI19(4167-4175)
IEEE DOI 2004
data handling, feature extraction, intervened causal model, causal attribution, visual models, image generative models, learning BibRef

Schlegel, U., Arnout, H., El-Assady, M., Oelke, D., Keim, D.A.,
Towards A Rigorous Evaluation Of XAI Methods On Time Series,
VXAI19(4197-4201)
IEEE DOI 2004
image processing, learning (artificial intelligence), text analysis, time series, SHAP, image domain, text-domain, explainable-ai-evaluation BibRef

Fong, R.C.[Ruth C.], Vedaldi, A.[Andrea],
Interpretable Explanations of Black Boxes by Meaningful Perturbation,
ICCV17(3449-3457)
IEEE DOI 1802
Explain the result of learning. image classification, learning (artificial intelligence), black box algorithm, black boxes, classifier decision, Visualization BibRef

Hossam, M.[Mahmoud], Le, T.[Trung], Zhao, H.[He], Phung, D.[Dinh],
Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability,
ICPR21(8922-8928)
IEEE DOI 2105
Training, Deep learning, Computational modeling, Perturbation methods, Text categorization, Natural languages, Training data BibRef

Plummer, B.A.[Bryan A.], Vasileva, M.I.[Mariya I.], Petsiuk, V.[Vitali], Saenko, K.[Kate], Forsyth, D.A.[David A.],
Why Do These Match? Explaining the Behavior of Image Similarity Models,
ECCV20(XI:652-669).
Springer DOI 2011
BibRef

Cheng, X., Rao, Z., Chen, Y., Zhang, Q.,
Explaining Knowledge Distillation by Quantifying the Knowledge,
CVPR20(12922-12932)
IEEE DOI 2008
Visualization, Task analysis, Measurement, Knowledge engineering, Optimization, Entropy, Neural networks BibRef

Chen, Y.,
Nonparametric Learning Via Successive Subspace Modeling (SSM),
ICIP19(3031-3032)
IEEE DOI 1910
Machine Learning, Explainable Machine Learning, Nonparametric Learning, Subspace Modeling, Successive Subspace Modeling BibRef

Shi, J.X.[Jia-Xin], Zhang, H.W.[Han-Wang], Li, J.Z.[Juan-Zi],
Explainable and Explicit Visual Reasoning Over Scene Graphs,
CVPR19(8368-8376).
IEEE DOI 2002
BibRef
