

Counterfactual Explainability in Graphs: Foundations, Generative Methods, and Ensemble Techniques

PRADO ROMERO, MARIO ALFONSO
2025-03-07

Abstract

Graph Neural Networks (GNNs) have revolutionized numerous domains, from social network analysis and recommender systems to biochemistry, by leveraging the rich interplay between node attributes and their relationships. These models excel at capturing the complexities of graph-structured data, driving advances in prediction accuracy. However, their black-box nature poses a critical challenge in high-stakes fields like healthcare and finance, where transparency, trust, and fairness are non-negotiable. Graph Counterfactual Explanations (GCEs) emerge as a compelling solution, shedding light on GNN decisions by exploring how subtle changes in a graph's structure or node features could alter outcomes. Unlike traditional feature-based explanations, GCEs offer actionable insights, enabling users to identify tangible steps toward desired results. For instance, in healthcare, GCEs might illuminate how a patient could reduce disease risk, while in finance, they could highlight pathways to improving creditworthiness.

Yet, realizing the full potential of GCEs is far from straightforward. Significant challenges remain, including: i) the lack of a unified definition and taxonomy for GCE methods; ii) the absence of standardized evaluation practices, which hampers fair comparisons across datasets, metrics, and oracles; iii) the need for counterfactuals that are not only plausible (staying true to the data's distribution) but also actionable, providing practical and meaningful guidance; and iv) the inherent complexity of graph data, where nodes and relationships are deeply interwoven and domain-specific, demanding nuanced and context-aware explanations.

This thesis addresses these challenges by advancing the state of the art through several pivotal innovations. First, we tackled challenges i) and ii) by rationalizing the field of study through a comprehensive survey. This work established a general definition of GCEs and introduced an exhaustive taxonomy of existing approaches. Additionally, it presented a qualitative comparison of state-of-the-art GCE methods, complemented by an empirical analysis, while critically evaluating widely used datasets and metrics and highlighting their strengths and limitations. This foundational survey has not only unified the fragmented, emerging field of GCE research but has also become a cornerstone reference for researchers, enabling clearer interpretation and comparison of methods within the broader XAI community.

Second, we addressed challenge ii) by introducing GRETEL, a versatile and extensible framework that standardizes and streamlines the development and evaluation of GCE methods. Offering a comprehensive toolkit (including datasets, oracles, explainers, and evaluation metrics), GRETEL has garnered interest not only from XAI researchers but also from Machine Learning (ML) practitioners seeking to incorporate explainability into their ML pipelines and to identify the methods best suited to their data and domains. By making GCE evaluations more accessible and consistent, GRETEL paves the way for more robust and reproducible research.

Third, we addressed challenge iii) with a novel GAN-based generative GCE method that marks a milestone in leveraging generative machine learning to produce counterfactual graphs with plausible structures and features. This method goes beyond traditional GCE methods by modifying both graph structure and node features while generating realistic, context-sensitive explanations, highlighting the potential of generative techniques to expand the scope and applicability of counterfactual explanations.
Finally, we tackled challenge iv) by proposing innovative meta-explainers that dynamically combine multiple GCE techniques through selection and aggregation strategies. By learning to identify the most suitable explanation method based on the characteristics of individual data instances, these meta-explainers deliver robust and adaptable counterfactuals, setting a new standard for performance across diverse datasets and domains.

In conclusion, this thesis addresses critical gaps in the understanding and development of GCEs, advancing the field through a combination of theoretical insights, practical frameworks, and innovative methodologies. By offering a unifying perspective, delivering a comprehensive survey, and introducing tools like GRETEL alongside groundbreaking generative approaches and meta-explainers, this work resolves pressing challenges and establishes a new benchmark for future research. Together, these advancements enhance the trustworthiness and usability of AI models in high-stakes domains, empowering users with transparent and actionable insights. These contributions lay a robust foundation for further exploration and application of counterfactual explanations, and the methodologies and tools presented here open new avenues for research, extending the reach of Explainable AI into emerging domains.
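As a minimal illustration of the selection strategy, the following Python sketch picks, per instance, the valid counterfactual closest to the original graph. Names such as Explanation, Explainer, and select_best are hypothetical and are not taken from the thesis or from GRETEL's actual API.

# Minimal sketch of a selection-based meta-explainer (illustrative only).
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class Explanation:
    counterfactual: object   # the modified graph proposed by a base explainer
    distance: float          # e.g. graph edit distance from the original instance
    valid: bool              # True if the oracle's prediction actually changed

# Each base explainer maps an input graph to a candidate explanation.
Explainer = Callable[[object], Explanation]

def select_best(graph: object,
                base_explainers: Sequence[Explainer]) -> Optional[Explanation]:
    # Run every base explainer on the instance and keep the valid
    # counterfactual that stays closest to the original graph.
    # A learned selector could instead predict, from the instance's
    # characteristics, which single explainer to run.
    candidates = [explain(graph) for explain in base_explainers]
    valid = [c for c in candidates if c.valid]
    if not valid:
        return None          # no base explainer produced a counterfactual
    return min(valid, key=lambda c: c.distance)

An aggregation strategy would instead merge the edits proposed by several base explainers into a single counterfactual rather than choosing one of them.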
Explainable AI; Graph Neural Networks; Artificial Intelligence; Machine Learning; Counterfactual Reasoning
Counterfactual Explainability in Graphs: Foundations, Generative Methods, and Ensemble Techniques / PRADO ROMERO, MARIO ALFONSO. - (2025 Mar 07).
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12571/34884