Explainable and Transparent AI and Multi-Agent Systems | 3 Volumes


The anthology Explainable and Transparent AI and Multi-Agent Systems is a monumental compilation that captures the forefront of research and philosophical inquiry in explainable artificial intelligence (XAI) and multi-agent systems (MAS). It spans three international workshops: EXTRAAMAS 2021 and EXTRAAMAS 2022, both held virtually due to the global pandemic, and EXTRAAMAS 2023, held in London. Edited by Davide Calvaresi, Amro Najjar, Michael Winikoff, Kary Främling, Andrea Omicini, Reyhan Aydogan, Rachele Carli, Giovanni Ciatto, and Yazan Mualla, the collection presents a comprehensive exploration of the challenges and advances in making AI systems more interpretable and transparent.

The work addresses the increasing complexity and opacity of modern AI systems, particularly those employing deep learning and neural networks. As these systems become integral to critical applications, from healthcare diagnostics to autonomous vehicles, the need for them to be explainable to human users becomes not just a technical concern but a moral imperative. The anthology delves into the philosophical underpinnings of explainability, questioning what it means for an AI system to be “understood” by humans and how this understanding shapes trust, accountability, and ethical deployment.

The first volume, emerging from the 2021 workshop, presents 19 rigorously revised papers and one short contribution, selected from an initial 32 submissions. These works are thematically organized into sections on XAI and machine learning; vision, understanding, deployment, and evaluation in XAI; applications of XAI; logic and argumentation in XAI; and decentralized and heterogeneous XAI systems. The diversity of topics reflects the multifaceted nature of explainability, acknowledging that transparency must be considered at every layer of AI development, from algorithmic foundations to real-world applications.

One notable contribution in this volume investigates the mechanisms of selective attention and its modeling within AI systems. By examining how neural activity related to attention can be classified and interpreted using frequency features, the research bridges neuroscience and machine learning. It highlights the importance of understanding cognitive processes to enhance the interpretability of AI models that mimic or interact with human cognition. This interdisciplinary approach exemplifies the anthology’s commitment to addressing explainability not merely as a technical hurdle but as a complex problem interconnected with human perception and cognition.
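To make the flavor of such work concrete, the sketch below shows a purely hypothetical pipeline in this spirit: it extracts power in named frequency bands from trial-level neural signals and cross-validates a simple classifier of attention state. The data is synthetic, and the sampling rate, band definitions, and choice of model are all assumptions, not details taken from the paper.

```python
# Hypothetical sketch: classify attention state from band-power features.
# Synthetic data; sampling rate, bands, and model are all assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 250  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(trials):
    """Mean spectral power per frequency band for each trial."""
    features = []
    for trial in trials:
        freqs, psd = welch(trial, fs=FS, nperseg=len(trial))
        features.append([psd[(freqs >= lo) & (freqs < hi)].mean()
                         for lo, hi in BANDS.values()])
    return np.array(features)

rng = np.random.default_rng(0)
trials = rng.standard_normal((100, FS))   # 100 one-second synthetic trials
labels = rng.integers(0, 2, size=100)     # attended (1) vs. unattended (0)

X = band_power_features(trials)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

The appeal of a feature-based pipeline like this for explainability is that each input to the classifier, the power in a named frequency band, is itself human-interpretable.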

The second volume, from the 2022 workshop, comprises 14 full papers reviewed and selected from 25 submissions. The papers delve into explainable machine learning, neuro-symbolic AI, explainable agents, metrics for XAI, and the interplay between AI and law. This volume sharpens the focus on methodologies that quantify and evaluate explainability, recognizing that without measurable standards, claims of transparency remain abstract. The inclusion of AI and law acknowledges the legal implications of AI deployment, where explainability becomes crucial for compliance, accountability, and ethical governance.

A significant study in this volume evaluates various importance estimators—techniques used to interpret deep learning classifiers—in the context of computed tomography (CT) imaging. The research underscores the challenges of applying deep learning to medical imaging, where interpretability is essential for clinical acceptance. By comparing different saliency map methods and their alignment with human expert annotations, the study reveals discrepancies between model-centric and human-centric evaluations of interpretability. This finding emphasizes the need for XAI methods that not only explain AI decisions in technical terms but also resonate with human understanding, particularly in high-stakes fields like medicine.
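As a concrete illustration of one way such an alignment might be measured, the following minimal sketch scores a saliency map against an expert's binary annotation mask using intersection-over-union. It is not the study's code; the 5% saliency threshold, the IoU metric, and the synthetic data are all assumptions.

```python
# Illustrative sketch (an assumption, not the study's code): threshold a
# model-derived saliency map and measure its overlap with a radiologist's
# annotation mask via intersection-over-union (IoU).
import numpy as np

def saliency_vs_annotation_iou(saliency, annotation, keep_fraction=0.05):
    """IoU between the top-k% salient pixels and an expert binary mask."""
    k = int(keep_fraction * saliency.size)
    threshold = np.partition(saliency.ravel(), -k)[-k]
    salient = saliency >= threshold
    inter = np.logical_and(salient, annotation).sum()
    union = np.logical_or(salient, annotation).sum()
    return inter / union if union else 0.0

# Synthetic stand-in for a CT slice: one saliency map, one expert mask.
rng = np.random.default_rng(1)
saliency = rng.random((512, 512))          # e.g. gradient-based attribution
annotation = np.zeros((512, 512), bool)
annotation[200:260, 180:240] = True        # expert-marked lesion region
print(f"IoU: {saliency_vs_annotation_iou(saliency, annotation):.3f}")
```

Under a metric like this, low overlap between what the model highlights and what the expert marked is one concrete way the gap between model-centric and human-centric interpretability shows up.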

The third volume, originating from the 2023 workshop, features 15 full papers and a short paper selected from 26 submissions, focusing on explainable agents and multi-agent systems, explainable machine learning, and cross-domain applied XAI. This volume reflects the maturation of the field, presenting advanced techniques and exploring the application of XAI across different domains. The research highlights the evolving nature of explainability as AI systems become more embedded in diverse aspects of society.

An exemplary work in this volume addresses the mining and validation of belief-based explanations in agent systems. The study tackles the practical challenges of generating explanations when agents operate without pre-designed explanation modules, in environments with unreliable observations and non-deterministic plan executions. By leveraging historical data and agent execution logs, the researchers propose a data-driven approach to infer and validate the beliefs that underpin agent actions. This contribution is significant as it moves beyond theoretical constructs, providing tangible methods for enhancing transparency in complex, real-world agent systems where understanding the rationale behind actions is essential for trust and collaboration.
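A minimal sketch of the general idea, under assumed log and data structures rather than the paper's actual algorithm: treat each log entry as the set of beliefs the agent held just before an action, and retain as explanatory only those beliefs that co-occur with the action above a confidence threshold, so that occasional unreliable observations do not derail the mined explanation.

```python
# Assumed design, not the paper's method: mine candidate beliefs that
# precede an agent's actions in execution logs, keeping only beliefs
# whose co-occurrence with the action clears a confidence threshold.
from collections import Counter, defaultdict

def mine_beliefs(log, min_confidence=0.8):
    """log: list of (beliefs_held_before_action, action) pairs."""
    action_counts = Counter(action for _, action in log)
    belief_counts = defaultdict(Counter)
    for beliefs, action in log:
        belief_counts[action].update(beliefs)
    explanations = {}
    for action, counts in belief_counts.items():
        explanations[action] = [
            belief for belief, n in counts.items()
            if n / action_counts[action] >= min_confidence
        ]
    return explanations

# Toy execution log: the agent opens the door mostly when it believes
# the door is closed and it holds the key; observations are noisy.
log = [
    ({"door_closed", "has_key"}, "open_door"),
    ({"door_closed", "has_key"}, "open_door"),
    ({"door_closed"}, "open_door"),            # missed "has_key" reading
    ({"battery_low"}, "recharge"),
]
print(mine_beliefs(log))
# -> {'open_door': ['door_closed'], 'recharge': ['battery_low']}
```

Here the confidence threshold is the knob that trades robustness to noisy observations against completeness of the recovered beliefs.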

Throughout the anthology, a persistent theme is the tension between the complexity of AI systems and the human demand for understandable explanations. The editors and contributors collectively grapple with questions such as: How can AI systems that operate on data and computations beyond human cognitive capacity be made interpretable? What are the philosophical implications of delegating decision-making to machines that we cannot fully comprehend? How does explainability impact the ethical deployment of AI in society?

The works compiled engage with these questions not only by proposing technical solutions but also by reflecting on the cognitive and philosophical aspects of explanation. They recognize that explainability is not solely a property of the AI system but a relational concept involving the AI, the context, and the human user. This perspective acknowledges that different stakeholders may require different levels of explanation, and that transparency must adapt to those needs.

Moreover, the anthology does not shy away from the ethical and legal dimensions of XAI. By including discussions on AI and law, the collection highlights the necessity of explainability for ensuring that AI systems adhere to legal standards and ethical norms. It underscores the role of transparency in enabling accountability, where understanding the decision-making process of AI is crucial for addressing biases, errors, and unintended consequences.

The emphasis on multi-agent systems adds another layer of complexity, as interactions between autonomous agents introduce emergent behaviors that can be difficult to predict and explain. The anthology explores how explainability can be achieved in such decentralized and heterogeneous systems, proposing frameworks and methodologies that account for the dynamic and interactive nature of MAS.

In synthesizing the cutting-edge research presented over the three workshops, Explainable and Transparent AI and Multi-Agent Systems serves as both a technical resource and a philosophical exploration of one of the most pressing issues in contemporary AI. It acknowledges that as AI systems become more powerful and pervasive, the traditional black-box approach is no longer tenable. The anthology calls for a concerted effort to develop AI systems that are not only effective but also transparent and understandable, aligning technological advancement with human values and societal needs.

For researchers and practitioners in AI and related fields, this work provides a wealth of knowledge on the latest methodologies and applications of XAI. For philosophers and ethicists, it offers a rich ground for examining the implications of AI on human understanding, agency, and ethics. Ultimately, the anthology represents a significant step towards bridging the gap between complex AI systems and the human demand for transparency, fostering a future where AI is both powerful and aligned with human values.


