Multi-layered "deep" neural networks are highly complex constructs inspired by the structure of the human brain. One shortcoming remains, however: the inner workings and decisions of these models often defy explanation. Image Credit: Brian Penny / AI Generated
Deep neural networks have achieved remarkable results across science and technology, but it remains largely unclear what makes them work so well. A new study sheds light on the inner workings of deep learning models that learn from relational datasets, such as those found in biological and social networks.
Graph Neural Networks (GNNs) are artificial neural networks designed to represent entities—such as individuals, molecules, or cities—and the interactions between them. These networks have practical applications in various domains; for example, they predict traffic flows in Google Maps and accelerate the discovery of new antibiotics within computational drug discovery pipelines.
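To make the idea concrete, here is a minimal sketch of the message-passing mechanism at the heart of most GNNs, written in plain NumPy. The toy graph, weight matrices, and function name are illustrative assumptions, not taken from the study or any particular library:

```python
import numpy as np

# Toy graph: 4 entities (nodes), each described by a 3-dimensional feature vector.
node_features = np.random.rand(4, 3)

# Interactions (edges) as (source, target) pairs, e.g. entity 0 interacts with entity 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]

# Weight matrices (random here; in a trained GNN these are learned).
W_self = np.random.rand(3, 3)
W_neigh = np.random.rand(3, 3)

def message_passing_step(features, edges):
    """One round of message passing: each node aggregates its neighbors'
    features and combines them with its own representation."""
    agg = np.zeros_like(features)
    degree = np.zeros(len(features))
    for src, dst in edges:
        agg[dst] += features[src]   # the neighbor "sends a message"
        degree[dst] += 1
    degree = np.maximum(degree, 1)  # avoid dividing by zero for isolated nodes
    agg /= degree[:, None]          # mean aggregation over neighbors
    # Combine own features with the aggregated neighborhood; ReLU nonlinearity.
    return np.maximum(features @ W_self + agg @ W_neigh, 0)

updated = message_passing_step(node_features, edges)
print(updated.shape)  # (4, 3): each node now encodes its local neighborhood
```

Stacking several such steps lets information propagate across the graph, which is how a GNN can relate entities that are not directly connected.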
GNNs are also a key component of AlphaFold, the acclaimed AI system that tackles the long-standing problem of protein folding in biology. Despite these achievements, the principles underlying their success remain poorly understood.
The new study examines how these algorithms extract knowledge from complex networks and identifies ways to improve their performance across a range of applications.