A Roadmap from Weights to Wisdom: Inspecting and Extracting Knowledge from Graph Neural Networks
Artem Chernobrovkin
2024-01-01
Abstract
Graph Neural Networks (GNNs) and Graph Convolutional Networks (GCNs) are powerful Artificial Intelligence (AI) models designed to process graph-structured data efficiently. However, their decision-making processes are often difficult to interpret, so they effectively function as "black boxes". This research project aims to enhance the inspectability and learning capabilities of GNNs and GCNs by integrating symbolic AI and by improving the representation of the knowledge they learn. The research proposal focuses on integrating logic with Neural Networks to achieve two key objectives: enhancing inspectability (RQ1) and facilitating the acquisition and representation of symbolic knowledge from sub-symbolic data (RQ2).
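
To make the object of study concrete, the sketch below shows a single graph-convolution layer in the standard Kipf & Welling formulation, i.e. the kind of sub-symbolic computation whose learned weights the proposal aims to inspect and translate into symbolic knowledge. This is an illustrative assumption, not the author's specific model or code; it assumes PyTorch and a dense adjacency matrix on a toy graph.

```python
# Minimal sketch of one GCN layer (Kipf & Welling style); illustrative only,
# not the models or datasets used in the actual research project.
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Add self-loops and symmetrically normalize the adjacency matrix.
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm_adj = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        # Aggregate neighbor features, then apply the learned weight matrix.
        return torch.relu(norm_adj @ self.linear(x))


# Hypothetical usage on a toy 4-node graph with 3-dimensional node features.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 1.],
                    [0., 1., 0., 0.],
                    [0., 1., 0., 0.]])
features = torch.randn(4, 3)
layer = GCNLayer(in_dim=3, out_dim=2)
print(layer(features, adj).shape)  # torch.Size([4, 2])
```

The learned weight matrix inside `self.linear` is exactly the sub-symbolic, hard-to-interpret component that RQ1 (inspectability) and RQ2 (extraction of symbolic knowledge) target.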


