With no insight into how AI algorithms work or what influences their results, the “black box” nature of AI technology raises important questions about trustworthiness. Illustration Credit: Gerd Altmann
An international team led by UNIGE, HUG and NUS has developed an innovative method for evaluating AI interpretability methods, with the aim of deciphering the basis of AI reasoning and exposing possible biases.
Researchers from the University of Geneva (UNIGE), the Geneva University Hospitals (HUG), and the National University of Singapore (NUS) have developed a novel method for evaluating the interpretability of artificial intelligence (AI) technologies, opening the door to greater transparency and trust in AI-driven diagnostic and predictive tools. The approach sheds light on the opaque workings of so-called “black box” AI algorithms, helping users understand what influences the results an AI produces and whether those results can be trusted. This is especially important in situations with significant consequences for people’s health and lives, such as the use of AI in medical applications. The research carries particular relevance in the context of the forthcoming European Union Artificial Intelligence Act, which aims to regulate the development and use of AI within the EU. The findings were recently published in the journal Nature Machine Intelligence.
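To make concrete what “interpretability” means here: such methods typically assign each input feature a score indicating how strongly it influenced the model’s prediction. The sketch below is a minimal, generic illustration of one common perturbation-based technique (occlusion), not the evaluation method from the paper; the model, data, and function names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_model(X):
    # Stand-in for an opaque model: a fixed linear rule the explainer
    # is not allowed to inspect. (Hypothetical; not from the study.)
    weights = np.array([3.0, 0.0, -1.5, 0.5])
    return X @ weights

def occlusion_attribution(model, x, baseline=0.0):
    """Score each feature by how much the prediction changes when that
    feature is replaced with a baseline ("occluded") value."""
    base_pred = model(x[None, :])[0]
    scores = np.empty_like(x)
    for i in range(x.size):
        x_occluded = x.copy()
        x_occluded[i] = baseline  # "remove" feature i
        scores[i] = base_pred - model(x_occluded[None, :])[0]
    return scores

x = rng.normal(size=4)
print("input features:", np.round(x, 2))
print("attributions:  ", np.round(occlusion_attribution(black_box_model, x), 2))
```

On this toy model the uninformative second feature correctly receives a score of zero; whether such scores can be trusted on real, non-linear models is exactly the kind of question an evaluation method like the one described above is designed to answer.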