Workshop overview
Deep learning (DL) research has made dramatic progress in recent years, achieving high performance on supervised learning tasks across numerous problem domains. At the same time, well-known challenges remain, such as the need for large amounts of labeled training data, solving synthesis problems with structured solutions (e.g., designs, plans, or schedules), and explainability. Case-based reasoning (CBR) is a knowledge-based methodology for reasoning from prior episodes; it has complementary capabilities, such as solving problems from small data sets, producing structured solutions, and generating concrete explanations, along with its own limitations. AutoML concerns processes for automatically generating end-to-end machine learning (e.g., DL) pipelines, and could use techniques that build pipelines from prior cases of successful pipeline components.

This workshop will bring together members of the DL, CBR, and AutoML communities to identify new opportunities for leveraging the CBR methodology to advance DL and DL to advance CBR, to identify opportunities and challenges for leveraging CBR for AutoML, to examine related efforts from all three subareas, and to develop approaches for advancing such integrations. The workshop will include a substantial discussion component.
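As a minimal illustration of the last point above, the sketch below retrieves a stored pipeline for a new dataset by meta-feature similarity. It is a sketch only: the meta-features, pipeline names, and values are hypothetical, not drawn from any particular AutoML system.

```python
import numpy as np

# Hypothetical case base for case-based AutoML: each case pairs the
# meta-features of a past dataset (here: n_instances, n_features,
# class entropy) with the pipeline that worked well on it.
case_base = [
    {"meta": np.array([1000.0, 20.0, 0.9]), "pipeline": ["impute", "scale", "random_forest"]},
    {"meta": np.array([50.0, 5.0, 0.4]), "pipeline": ["impute", "knn"]},
    {"meta": np.array([100000.0, 300.0, 1.5]), "pipeline": ["feature_hashing", "sgd"]},
]

def retrieve_pipeline(query_meta, cases):
    """Return the pipeline of the most similar prior dataset
    (Euclidean distance over z-normalized meta-features)."""
    metas = np.stack([c["meta"] for c in cases])
    mu, sigma = metas.mean(axis=0), metas.std(axis=0) + 1e-9
    dists = np.linalg.norm((metas - mu) / sigma - (query_meta - mu) / sigma, axis=1)
    return cases[int(np.argmin(dists))]["pipeline"]

print(retrieve_pipeline(np.array([800.0, 15.0, 0.8]), case_base))
# -> ['impute', 'scale', 'random_forest']
```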
Potential subtopics include, but are not limited to:
- New theoretical approaches for integrating CBR and DL
- Approaches for using CBR to guide the selection and composition of problem-appropriate DL architectures
- Using the case-based cycle as a framework for combining DL components or integrating them with other technologies (see the sketch below)
- Adding solution adaptation processes to deep learning
- CBR-DL integrations for:
  - Few-shot learning
  - Generating structured solutions
  - Reducing the sizes of trained networks
- Developing a CBR-inspired DL pipeline
- CBR support for AutoML
- Presentation of results on existing integrations
- Discussion on challenge problems from DL, CBR, or AutoML
For further discussion of some of these topics, see "On Bringing Case-Based Reasoning Methodology to Deep Learning".
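To make the case-based cycle subtopic concrete, here is a minimal skeleton of the classic retrieve/reuse/revise/retain loop, with each stage left as a pluggable function. The signature is an illustrative assumption, not a prescribed API; the point is that DL components can fill any slot.

```python
def cbr_cycle(query, case_base, retrieve, reuse, revise, retain):
    """One pass through the classic CBR cycle. Each stage is a
    pluggable function, so DL components (a learned similarity
    metric, a network-based adapter, etc.) can fill any slot."""
    case = retrieve(query, case_base)    # Retrieve: most similar prior episode
    proposed = reuse(query, case)        # Reuse: adapt its solution to the query
    confirmed = revise(query, proposed)  # Revise: validate/repair the solution
    retain(case_base, query, confirmed)  # Retain: store the new episode
    return confirmed
```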
Agenda
Note: Times are listed in Montreal Time (UTC-4)
| Time | Session |
|---|---|
| 10:00-10:20 | Welcome |
| 10:20-11:05 | Paper Session I: "Twin Systems for DeepCBR: A Menagerie of Deep Learning and Case-Based Reasoning Pairings for Explanation and Data Augmentation" (Mark Keane, Eoin Kenny, Eoin Delaney, Mohammed Temraz, Derek Greene, Barry Smyth), http://arxiv.org/abs/2104.14461; "Applying the Case Difference Heuristic to Learn Adaptations from Deep Network Features" (Xiaomeng Ye, Ziwei Zhao, David Leake, Xizi Wang, David Crandall), https://arxiv.org/abs/2107.07095 |
| 11:05-11:35 | Positions and Discussion I: DL Advancing CBR: Opportunities and Challenges (Kerstin Bach, NTNU, and Stewart Massie, Robert Gordon University) |
| 11:35-12:00 | Break |
| 12:00-12:45 | Paper Session II: "Informed Machine Learning for Improved Similarity Assessment in Process-Oriented Case-Based Reasoning" (Maximilian Hoffmann, Ralph Bergmann), https://arxiv.org/abs/2106.15931; "Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning" (Alina J. Barnett, Fides Schwartz, Chaofan Tao, Chaofan Chen, Yinhao Ren, Joseph Y. Lo, Cynthia Rudin), https://arxiv.org/abs/2107.05605 |
| 12:45-13:15 | Positions and Discussion II: CBR Advancing DL: Opportunities and Challenges (Leslie Smith, NRL, and Rosina Weber, Drexel University) |
| 13:15-13:35 | Positions and Discussion III: CBR Advancing AutoML: Opportunities and Challenges (Odd Erik Gundersen, NTNU) |
| 13:35-14:00 | Closing discussion: Next steps |
Accepted Papers
Twin Systems for DeepCBR: A Menagerie of Deep Learning and Case-Based Reasoning Pairings for Explanation and Data Augmentation
Mark T Keane, Eoin M Kenny, Mohammed Temraz, Derek Greene, Barry Smyth
Recently, it has been proposed that fruitful synergies may exist between Deep Learning (DL) and Case-Based Reasoning (CBR); that there are insights to be gained by applying CBR ideas to problems in DL (what could be called DeepCBR). In this paper, we report on a program of research that applies CBR solutions to the problem of Explainable AI (XAI) in DL. We describe a series of twin-systems pairings of opaque DL models with transparent CBR models that allow the latter to explain the former using factual, counterfactual and semi-factual explanation strategies. This twinning shows that functional abstractions of DL (e.g., feature weights, feature importance and decision boundaries) can be used to drive these explanatory solutions. We also raise the prospect that this research applies to the problem of Data Augmentation in DL, underscoring the fecundity of these DeepCBR ideas.

http://arxiv.org/abs/2104.14461
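As a rough, unofficial sketch of the twinning idea (not the authors' implementation), factual explanation by nearest neighbor in the network's own feature space might look like the following. Here `featurize` (e.g., a penultimate-layer embedding) and `predict` are assumed stand-ins for the opaque model's interface.

```python
import numpy as np

def factual_explanation(query_x, train_X, train_y, featurize, predict):
    """Twin an opaque model with a transparent 1-NN 'twin': the nearest
    training case in the model's feature space is returned as a factual
    explanation of the prediction."""
    q = featurize(query_x)
    feats = np.stack([featurize(x) for x in train_X])
    nearest = int(np.argmin(np.linalg.norm(feats - q, axis=1)))
    return {
        "prediction": predict(query_x),
        "nearest_case": train_X[nearest],
        "nearest_label": train_y[nearest],
    }
```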
Applying the Case Difference Heuristic to Learn Adaptations from Deep Network Features
Xiaomeng Ye, Ziwei Zhao, David Leake, Xizi Wang, David Crandall
The case difference heuristic (CDH) approach is a knowledge-light method for learning case adaptation knowledge from the case base of a case-based reasoning system. Given a pair of cases, the CDH approach attributes the difference in their solutions to the difference in the problems they solve, and generates adaptation rules to adjust solutions accordingly when a retrieved case and new query have similar problem differences. As an alternative to learning adaptation rules, several researchers have applied neural networks to learn to predict solution differences from problem differences. Previous work on such approaches has assumed that the feature set describing problems is predefined. This paper investigates a two-phase process combining deep learning for feature extraction and neural-network-based adaptation learning from the extracted features. Its performance is demonstrated in a regression task on image data: predicting age given the image of a face. Results show that the combined process can successfully learn adaptation knowledge applicable to nonsymbolic differences in cases. The CBR system achieves slightly lower performance overall than a baseline deep network regressor, but better performance than the baseline on novel queries.

https://arxiv.org/abs/2107.07095
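A minimal sketch of the CDH idea described in the abstract (illustrative only; the paper learns this mapping with a neural network over deep features rather than by lookup):

```python
import numpy as np

def cdh_adapt(query_feats, retrieved_feats, retrieved_solution, diff_pairs):
    """Case difference heuristic in miniature: among stored
    (problem_difference, solution_difference) pairs, find the pair whose
    problem difference best matches (query - retrieved) and apply its
    solution difference to the retrieved case's solution."""
    d = query_feats - retrieved_feats
    prob_diffs = np.stack([p for p, _ in diff_pairs])
    best = int(np.argmin(np.linalg.norm(prob_diffs - d, axis=1)))
    return retrieved_solution + diff_pairs[best][1]
```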
Informed Machine Learning for Improved Similarity Assessment in Process-Oriented Case-Based Reasoning
Maximilian Hoffmann, Ralph Bergmann
Currently, Deep Learning (DL) components within a Case-Based Reasoning (CBR) application often lack the comprehensive integration of available domain knowledge. The trend within machine learning towards so-called informed machine learning can help to overcome this limitation. In this paper, we therefore investigate the potential of integrating domain knowledge into Graph Neural Networks (GNNs) that are used for similarity assessment between semantic graphs within process-oriented CBR applications. We integrate knowledge in two ways: first, a special data representation and processing method encodes structural knowledge about the semantic annotations of each graph node and edge; second, the message-passing component of the GNNs is constrained by knowledge of legal node mappings. The evaluation examines the quality and training time of the extended GNNs compared to the stock models. The results show that both extensions can provide better quality, shorter training times, or in some configurations both advantages at once.

https://arxiv.org/abs/2106.15931
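A loose sketch of the second kind of knowledge integration, constraining message passing with a mask. The shapes and the mask's exact semantics are assumptions for illustration; the paper constrains messages using knowledge of legal node mappings between semantic graphs, not this generic form.

```python
import numpy as np

def masked_message_pass(node_feats, adjacency, legal_mask, weights):
    """One generic message-passing step in which a binary mask, derived
    from domain knowledge about which node pairs may legally interact,
    zeroes out forbidden messages before aggregation.
    Shapes: node_feats (n, d), adjacency and legal_mask (n, n),
    weights (d, d_out)."""
    allowed = adjacency * legal_mask                # keep only knowledge-legal edges
    return np.tanh(allowed @ node_feats @ weights)  # aggregate neighbors, transform
```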
Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning
Alina Jade Barnett, Fides Regina Schwartz, Chaofan Tao, Chaofan Chen, Yinhao Ren, Joseph Y. Lo, Cynthia Rudin
When we deploy machine learning models in high-stakes medical settings, we must ensure these models make accurate predictions that are consistent with known medical science. Inherently interpretable networks address this need by explaining the rationale behind each decision while maintaining equal or higher accuracy compared to black-box models. In this work, we present a novel interpretable neural network algorithm that uses case-based reasoning for mammography. Designed to aid a radiologist in their decisions, our network presents both a prediction of malignancy and an explanation of that prediction using known medical features. In order to yield helpful explanations, the network is designed to mimic the reasoning processes of a radiologist: our network first detects the clinically relevant semantic features of each image by comparing each new image with a learned set of prototypical image parts from the training images, then uses those clinical features to predict malignancy. Compared to other methods, our model detects clinical features (mass margins) with equal or higher accuracy, provides a more detailed explanation of its prediction, and is better able to differentiate the classification-relevant parts of the image.
https://arxiv.org/abs/2107.05605
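A minimal sketch of the prototype-comparison step in this family of models (ProtoPNet-style scoring; array shapes and constants are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def prototype_activations(patch_feats, prototypes, eps=1e-4):
    """'This looks like that' scoring: compare every image-patch
    embedding to each learned prototypical part and keep the best
    match per prototype. Shapes: patch_feats (n_patches, d),
    prototypes (n_prototypes, d). The scores would feed a linear
    layer predicting clinical features (e.g., mass margins) and
    then malignancy."""
    dists = np.linalg.norm(patch_feats[:, None, :] - prototypes[None, :, :], axis=2)
    closest = dists.min(axis=0)                        # best patch per prototype
    return np.log((closest + 1.0) / (closest + eps))   # high score = close match
```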
Organizers

David Aha
Naval Research Laboratory
David Crandall
Indiana University
David Leake
Indiana University
Organizing Committee
Yue Chen
Indiana University
Xiaomeng Ye
Indiana University
Xizi Wang
Indiana University
Program Committee
Klaus-Dieter Althoff
University of Hildesheim
Kerstin Bach
Norwegian University of Science and Technology
Chaofan Chen
University of Maine
Ralph Bergmann
University of Trier
Stelios Kapetanakis
University of Brighton
Sadiq Sani
British Telecommunications PLC
Swaroop Vattam
MIT Lincoln Laboratory
Rosina Weber
Drexel University
Nirmalie Wiratunga
Robert Gordon University
Mark Keane
University College Dublin
Kyle Martin
Robert Gordon University
Michael Floyd
Knexus Research
Daniel Lopez-Sanchez
University of Salamanca
Important dates
- Abstract Submission Deadline: May 7, 2021
- Paper Submission Deadline (long, short and position papers): May 11, 2021 (Extended)
- Notification Date: May 25, 2021
- Workshop Date: Friday, Aug 20, 2021