
Friday, September 2, 2022

How Artificial Intelligence can explain its decisions

They have combined the seemingly incompatible inductive approach of machine learning with deductive logic: Stephanie Schörner, Axel Mosig and David Schuhmacher (from left).
Credit: RUB, Marquard

If an algorithm detects a tumor in a tissue sample, it does not reveal how it arrived at that result, which makes it hard to trust. Bochum researchers are therefore taking a new approach.

Artificial intelligence (AI) can be trained to recognize whether a tissue image contains a tumor. How it arrives at its decision, however, has so far remained hidden. A team from the Research Center for Protein Diagnostics (PRODI) at Ruhr University Bochum is developing a new approach that makes an AI's decision explainable and thus trustworthy. The researchers led by Prof. Dr. Axel Mosig describe the approach in the journal "Medical Image Analysis".

Bioinformatician Axel Mosig cooperated with Prof. Dr. Andrea Tannapfel, head of the Institute of Pathology, the oncologist Prof. Dr. Anke Reinacher-Schick from St. Josef Hospital of Ruhr University, and the biophysicist and PRODI founding director Prof. Dr. Klaus Gerwert. The group developed a neural network, i.e. an AI, that can classify whether a tissue sample contains a tumor or not. To do this, they fed the AI many microscopic tissue images, some of which contained tumors while others were tumor-free.
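The press release does not include the team's code; the following is only a minimal sketch in PyTorch of what such a training setup could look like, assuming a simple convolutional binary classifier and using randomly generated stand-in images and labels in place of the real microscopic tissue data.

# Minimal sketch (not the authors' code): training a binary classifier that
# labels microscopic tissue images as "tumor" (1) or "tumor-free" (0).
# Architecture, image size, and data loading are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class TissueClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, x):
        return self.head(self.features(x))  # raw logit; sigmoid is applied inside the loss

# Stand-in data: 64 random 128x128 RGB "tissue images" with 0/1 labels.
images = torch.rand(64, 3, 128, 128)
labels = torch.randint(0, 2, (64, 1)).float()
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

model = TissueClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()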

"Neuronal networks are initially a black box: it is unclear which distinguishing features a network learns from the training data," explains Axel Mosig. Compared to human experts, they lack the ability to explain decisions. "Especially in medical applications, it is important that the AI can be explained and therefore trustworthy," added bioinformatician David Schuhmacher, who was involved in the study.

AI is based on falsifiable hypotheses

A neural network is first trained on many data sets so that it can distinguish tumor-containing from tumor-free tissue images (input from above in the graphic). It then receives a new tissue image from an experiment (input from the left). Through inductive inference, the neural network classifies the image as "tumor-containing" or "tumor-free". At the same time, it generates an activation map from the tissue image. The activation map emerges from the inductive learning process and initially has no relation to reality. That relation is established by the falsifiable hypothesis that areas of high activation correspond exactly to the tumor regions in the sample. This hypothesis can be tested in further experiments. The approach thus follows deductive logic.
Credit: PRODI

The explainable AI of the Bochum team is therefore based on the only kind of meaningful statement that science knows: falsifiable hypotheses. If a hypothesis is wrong, it must be possible to demonstrate this through an experiment. Artificial intelligence usually follows the principle of inductive inference: from concrete observations, the training data, the AI builds a general model with which it evaluates all further observations.

The problem behind this was described by the philosopher David Hume 250 years ago and is easy to illustrate: no matter how many white swans you observe, you can never conclude from this data that all swans are white and that black swans do not exist. Science therefore makes use of deductive logic, where a general hypothesis is the starting point. For example, the hypothesis that all swans are white is falsified as soon as a black swan is observed.

Activation map shows where the tumor is recognized


The neural network derives an activation map (right) from the microscopic image of a tissue sample (left). A hypothesis establishes the relationship between the purely computationally determined intensity of the activation and the experimentally verifiable detection of tumor regions.
Credit: PRODI


"At first glance, the inductive AI and the deductive scientific method appear almost incompatible," says physicist Stephanie Schörner, who was also involved in the study. But the researchers found a way. Your newly developed neuronal network not only provides a classification as to whether a tissue sample contains a tumor or is tumor-free. It also creates an activation card for the microscopic tissue image.

The activation map is based on a falsifiable hypothesis, namely that the activation derived from the neural network corresponds exactly to the tumor regions in the sample. This hypothesis can be checked with site-specific molecular methods.
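As a rough illustration of this idea, and not the authors' implementation, the sketch below derives a class-activation-style map from a stand-in convolutional classifier and then quantifies how well the high-activation areas overlap with a tumor mask that would, in practice, come from site-specific molecular experiments. The network, the 0.5 thresholds, and the overlap score are illustrative assumptions.

# Minimal sketch (not the authors' code): turning a classifier's last
# convolutional features into an activation map, then testing the falsifiable
# hypothesis that high activation coincides with annotated tumor regions.
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Sequential(               # feature extractor (stand-in for a trained one)
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)
fc = nn.Linear(64, 1)               # tumor / tumor-free logit from pooled features

image = torch.rand(1, 3, 128, 128)  # stand-in microscopic tissue image
feature_maps = conv(image)          # shape: (1, 64, 32, 32)

pooled = feature_maps.mean(dim=(2, 3))   # global average pooling -> (1, 64)
tumor_logit = fc(pooled)                 # classification output ("tumor" if > 0)

# Class-activation-map style: weight each feature channel by its contribution
# to the "tumor" logit, sum over channels, and upsample to image resolution.
weights = fc.weight.view(1, -1, 1, 1)                  # (1, 64, 1, 1)
cam = F.relu((weights * feature_maps).sum(dim=1, keepdim=True))
activation_map = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                               align_corners=False)[0, 0]
activation_map = activation_map / (activation_map.max() + 1e-8)   # scale to [0, 1]

# Hypothesis test sketch: compare high-activation pixels with a tumor mask
# obtained experimentally (here a random stand-in mask).
tumor_mask = torch.rand(128, 128) > 0.5
predicted_region = activation_map > 0.5
intersection = (predicted_region & tumor_mask).sum().item()
union = (predicted_region | tumor_mask).sum().item()
print("Overlap (IoU) between activation and tumor regions:", intersection / max(union, 1))

A strong overlap between high activation and the experimentally detected tumor regions would support the hypothesis, while a clear mismatch would falsify it, which is exactly the deductive step the approach relies on.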

"Thanks to the interdisciplinary structures at the PRODI, we have the best prerequisites to incorporate the hypothesis-based approach into the development of trustworthy biomarker AI in the future, for example to be able to distinguish certain therapy-relevant tumor subtypes," summarizes Axel Mosig.

Source/Credit: Ruhr University Bochum

