Scientific Frontline: Artificial Intelligence

Monday, October 20, 2025

New AI Model for Drug Design Brings More Physics to Bear in Predictions

This illustration shows the mesh of anchoring points the team obtained by discretizing the manifold, an estimation of the distribution of atoms and the probable locations of electrons in the molecule. This is important because, as the authors note in the new paper, treating atoms as solid points "does not fully reflect the spatial extent that real atoms occupy in three-dimensional space."
Image Credit: Liu et al./PNAS

When machine learning is used to suggest new potential scientific insights or directions, algorithms sometimes offer solutions that are not physically sound. Take for example AlphaFold, the AI system that predicts the complex ways in which amino acid chains will fold into 3D protein structures. The system sometimes suggests "unphysical" folds—configurations that are implausible based on the laws of physics—especially when asked to predict the folds for chains that are significantly different from its training data. To limit this type of unphysical result in the realm of drug design, Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences at Caltech, and her colleagues have introduced a new machine learning model called NucleusDiff, which incorporates a simple physical idea into its training, greatly improving the algorithm's performance.
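The paper's actual training objective is not reproduced here, but the general idea of building a simple physical constraint into a model's loss can be sketched with a toy penalty that discourages generated atoms from overlapping. The 1.2-angstrom minimum separation and the random coordinates below are illustrative stand-ins, not values from the study.

import numpy as np

def overlap_penalty(coords, min_separation=1.2):
    # coords: (n_atoms, 3) predicted atomic positions in angstroms (toy values)
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)              # ignore self-distances
    violation = np.clip(min_separation - dists, 0.0, None)
    return (violation ** 2).sum() / 2.0          # each pair counted once

coords = np.random.default_rng(0).uniform(0.0, 5.0, size=(20, 3))
penalty = overlap_penalty(coords)                # would be added to the model's usual training loss
print(round(float(penalty), 3))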

Friday, October 17, 2025

When Machines Learn to Feel

Changes in heart rate can provide information about physical and emotional well-being. 
Photo Credit: © RUB, Kramer

Large language models can understand, interpret, and adapt their responses not only to linguistic prompts but also to heart rate data. Dr. Morris Gellisch, previously of Ruhr University Bochum, Germany, and now at the University of Zurich, Switzerland, and Boris Burr from Ruhr University Bochum verified this in an experiment. They developed a technical interface through which physiological data can be transmitted to the language model in real time, allowing the AI to account for subtle physiological signals such as changes in heart activity. This opens new doors for use in medical and care applications. The work was published in the journal Frontiers in Digital Health.
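The paper's interface is not described in detail in this excerpt. As a minimal sketch of the concept, recent heart-rate readings could simply be summarized and prepended to the user's message before it reaches the model; the build_prompt helper and sample values below are hypothetical.

from statistics import mean

def build_prompt(user_message, heart_rate_samples_bpm):
    # heart_rate_samples_bpm: recent readings from a wearable sensor (assumed input)
    summary = (f"Recent heart rate: mean {mean(heart_rate_samples_bpm):.0f} bpm, "
               f"latest {heart_rate_samples_bpm[-1]} bpm.")
    return f"{summary}\nUser: {user_message}\nTake this physiological context into account."

print(build_prompt("I feel a bit tense before my exam.", [72, 84, 95, 101]))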

Thursday, October 9, 2025

AI tool offers deep insight into the immune system

scHDeepInsight: an overview of the process linking single-cell RNA input, image conversion, and CNN analysis to hierarchical immune cell classification.
Image Credit: ©2025 Tsunoda et al.
(CC BY-ND 4.0)

Researchers explore the human immune system by looking at its active components, namely the various genes and cells involved. But there is a broad range of these, and observations necessarily produce vast amounts of data. For the first time, researchers, including some from the University of Tokyo, have built a software tool that leverages artificial intelligence not only to analyze these cells quickly and consistently, but also to categorize them and to spot novel patterns that people have not yet seen.
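The caption above outlines the pipeline: single-cell RNA input, image conversion, and CNN analysis. As a rough illustration of the image-conversion step only, a cell's gene-expression vector can be laid out on a 2D grid so that a convolutional network can process it; the naive reshape below is a stand-in for the tool's actual gene-to-pixel mapping.

import numpy as np

n_genes = 1024                                          # toy number of measured genes
expression = np.random.default_rng(0).random(n_genes)   # one cell's expression profile (toy)
image = expression.reshape(32, 32)                      # 32 x 32 "expression image"
print(image.shape)                                      # this image would be fed to a CNN classifier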

Our immune system is important: it is impossible to imagine complex life existing without it. This system, comprising different kinds of cells that each play a different role, identifies threats to our health and acts to defend us. It is very effective, but far from perfect; hence the existence of diseases such as the notorious acquired immunodeficiency syndrome, or AIDS. Recent global crises, such as the coronavirus pandemic, also highlight the importance of research into this intricate yet powerful system.

Monday, September 22, 2025

New Diagnostic Tool Developed at Dana-Farber Revolutionizes Acute Leukemia Diagnosis

Volker Hovestadt, PhD
Assistant Professor of Pediatrics, Harvard Medical School; Independent Investigator/Assistant Professor, Department of Pediatric Oncology, Dana-Farber Cancer Institute
Photo Credit: Courtesy of Dana-Farber Cancer Institute

Researchers at Dana-Farber Cancer Institute have developed a groundbreaking diagnostic tool that could transform the way acute leukemia is identified and treated. The tool, called MARLIN (Methylation- and AI-guided Rapid Leukemia Subtype Inference), uses DNA methylation patterns and machine learning to classify acute leukemia with speed and accuracy. This tool has the potential to significantly improve patient care by allowing faster and more precise treatment decisions.

Acute leukemia is an aggressive blood cancer that requires accurate diagnosis to guide treatment. Current diagnostic methods, which rely on a combination of molecular and cytogenetic tests, often take days or even weeks to complete. MARLIN, however, can provide results in as little as two hours from the time of biopsy. By providing rapid and detailed insights into leukemia subtypes, MARLIN could enable clinicians to make treatment decisions sooner and with more complete information.
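MARLIN's actual architecture and training data are not detailed here. As a generic, hedged sketch of the underlying task, subtype classification from DNA methylation profiles (beta values between 0 and 1 per CpG site) can be framed as a standard supervised learning problem; the toy data and simple scikit-learn classifier below are illustrative only.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_cpg_sites, n_subtypes = 300, 500, 3
X = rng.uniform(0.0, 1.0, size=(n_samples, n_cpg_sites))   # toy methylation beta values
y = rng.integers(0, n_subtypes, size=n_samples)            # toy subtype labels

clf = LogisticRegression(max_iter=1000).fit(X[:250], y[:250])
print("held-out accuracy:", clf.score(X[250:], y[250:]))   # near chance on random toy data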

New tool makes generative AI models more likely to create breakthrough materials

The researchers applied their technique to generate millions of candidate materials consisting of geometric lattice structures associated with quantum properties. The kagome lattice, represented here, can support the creation of materials that could be useful for quantum computing.
Image Credit: Jose-Luis Olivares, MIT; iStock
(CC BY-NC-ND 4.0)

The artificial intelligence models that turn text into images are also useful for generating new materials. Over the last few years, generative materials models from companies like Google, Microsoft, and Meta have drawn on their training data to help researchers design tens of millions of new materials.

But when it comes to designing materials with exotic quantum properties like superconductivity or unique magnetic states, those models struggle. That’s too bad, because humans could use the help. For example, after a decade of research into a class of materials that could revolutionize quantum computing, called quantum spin liquids, only a dozen material candidates have been identified. The bottleneck means there are fewer materials to serve as the basis for technological breakthroughs.

Now, MIT researchers have developed a technique that lets popular generative materials models create promising quantum materials by following specific design rules. The rules, or constraints, steer models to create materials with unique structures that give rise to quantum properties.

“The models from these large companies generate materials optimized for stability,” says Mingda Li, MIT’s Class of 1947 Career Development Professor. “Our perspective is that’s not usually how materials science advances. We don’t need 10 million new materials to change the world. We just need one really good material.”

Thursday, February 6, 2025

Improved Brain Decoder Holds Promise for Communication in People with Aphasia

Brain activity like this, measured in an fMRI machine, can be used to train a brain decoder to decipher what a person is thinking about. In this latest study, UT Austin researchers have developed a method to adapt their brain decoder to new users far faster than the original training, even when the user has difficulty comprehending language.
Image Credit: Jerry Tang/University of Texas at Austin.

People with aphasia — a brain disorder affecting about a million people in the U.S. — struggle to turn their thoughts into words and comprehend spoken language.

A pair of researchers at The University of Texas at Austin has demonstrated an AI-based tool that can translate a person’s thoughts into continuous text, without requiring the person to comprehend spoken words. And the process of training the tool on a person’s own unique patterns of brain activity takes only about an hour. This builds on the team’s earlier work creating a brain decoder that required many hours of training on a person’s brain activity as the person listened to audio stories. This latest advance suggests it may be possible, with further refinement, for brain computer interfaces to improve communication in people with aphasia.
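The paper's specific adaptation procedure is not spelled out in this excerpt. Purely as an assumption-laden sketch, one generic way to shorten training for a new user is to learn a linear mapping from the new user's brain responses into the space of a reference user on whom the decoder was already trained, using a short session of shared stimuli.

import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels_new, n_voxels_ref = 600, 800, 900
R_new = rng.normal(size=(n_timepoints, n_voxels_new))   # new user's responses (toy)
R_ref = rng.normal(size=(n_timepoints, n_voxels_ref))   # reference user's responses (toy)

# Least-squares map from the new user's voxel space into the reference space;
# a decoder trained on the reference user is then applied to the mapped data.
M, *_ = np.linalg.lstsq(R_new, R_ref, rcond=None)
print((R_new @ M).shape)                                # (600, 900)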

“Being able to access semantic representations using both language and vision opens new doors for neurotechnology, especially for people who struggle to produce and comprehend language,” said Jerry Tang, a postdoctoral researcher at UT in the lab of Alex Huth and first author on a paper describing the work in Current Biology. “It gives us a way to create language-based brain computer interfaces without requiring any amount of language comprehension.”

Monday, February 3, 2025

AI unveils: Meteoroid impacts cause Mars to shake

High-resolution CaSSIS image of one of the newly discovered impact craters in Cerberus Fossae. The so-called "blast zone", i.e. the dark rays around the crater, is clearly visible.
Image Credit: © ESA/TGO/CaSSIS
(CC-BY-SA 3.0 IGO)

Meteoroid impacts create seismic waves that shake Mars more strongly and at greater depths than previously thought. This is shown by an investigation using artificial intelligence carried out by an international research team led by the University of Bern. Similarities were found between numerous meteoroid impacts on the surface of Mars and marsquakes recorded by NASA's Mars lander InSight. These findings open up a new perspective on the impact rate and seismic dynamics of the Red Planet.

Meteoroid impacts have a significant influence on the landscape evolution of solid planetary bodies in our solar system, including Mars. By studying craters – the visible remnants of these impacts – important properties of the planet and its surface can be determined. Satellite images help to constrain the formation time of impact craters and thus provide valuable information on impact rates.

A recently published study led by Dr. Valentin Bickel from the Center for Space and Habitability at the University of Bern presents the first comprehensive catalog of impacts on the Martian surface that took place near NASA's Mars lander during the InSight mission between December 2018 and December 2022. Bickel is also an InSight science team member. The study has just been published in the journal Geophysical Research Letters.

Friday, January 24, 2025

OHSU researchers use AI machine learning to map hidden molecular interactions in bacteria

Andrew Emili, Ph.D., professor of systems biology and oncological sciences, works in his lab at OHSU. Emili is part of a multi-disciplinary research team that uncovered how small molecules within bacteria interact with proteins, revealing a network of molecular connections that could improve drug discovery and cancer research.
Photo Credit: OHSU/Christine Torres Hicks

A new study from Oregon Health & Science University has uncovered how small molecules within bacteria interact with proteins, revealing a network of molecular connections that could improve drug discovery and cancer research.

The work also highlights how methods and principles learned from bacterial model systems can be applied to human cells, providing insights into how diseases like cancer emerge and how they might be treated. The results are published today in the journal Cell.

The multi-disciplinary research team, led by Andrew Emili, Ph.D., professor of systems biology and oncological sciences in the OHSU School of Medicine and OHSU Knight Cancer Institute, alongside Dima Kozakov, Ph.D., professor at Stony Brook University, studied Escherichia coli, or E. coli, a simple model organism, to map how metabolites — small molecules essential for life — interact with key proteins such as enzymes and transcription factors. These interactions control important processes such as cell growth, division and gene expression, but how exactly they influence protein function is not always clear.

Monday, January 13, 2025

Oxford researchers develop blood test to enable early detection of multiple cancers

Photo Credit: Fernando Zhiminaicela

Oxford University researchers have unveiled a new blood test, powered by machine learning, which shows real promise in detecting multiple types of cancer in their earliest stages, when the disease is hardest to detect.

Named TriOx, this innovative test analyses multiple features of DNA in the blood to identify subtle signs of cancer, which could offer a fast, sensitive and minimally invasive alternative to current detection methods.

The study, published in Nature Communications, showed that TriOx accurately detected cancer, including in its early stages, across six cancer types and reliably distinguished people who had cancer from those who did not.

Cancers are more likely to be cured if they’re caught early, and early treatment is also much cheaper for healthcare systems. While the test is still in the development phase, it demonstrates the promise of blood-based early cancer detection, a technology that could revolutionize screening and diagnostic practices.

A team of researchers at the University of Oxford has developed a new liquid biopsy test capable of detecting six cancers at an early stage. The cancer types evaluated in this study were colorectal, esophageal, pancreatic, renal, ovarian and breast.

Tuesday, April 2, 2024

AI breakthrough: UH researchers help uncover climate impact on whales

Underside of a humpback whale’s tail fluke which can serve as a “finger-print” for identification.
Photo Credit: Adam Pack

More than 10,000 images of humpback whale tail flukes collected by University of Hawaiʻi researchers have played a pivotal role in revealing both positive trends in the historical annual abundance of North Pacific humpback whales and how a major climate event negatively impacted the population. Adam Pack, who heads the UH Hilo Marine Mammal Laboratory, Lars Bejder, director of the UH Mānoa Marine Mammal Research Program (MMRP), and graduate students Martin van Aswegen and Jens Currie co-authored a study on humpback whales in the North Pacific Ocean. The images, along with artificial intelligence (AI)-driven image recognition, were instrumental in tracking individuals and offering insights into the 20% population decline observed in 2012–21.

“The underside of a humpback whale’s tail fluke has a unique pigmentation pattern and trailing edge that can serve as the ‘finger-print’ for identifying individuals,” said Pack.
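The project's actual matching software is not named in this excerpt. As a generic sketch of how AI-driven photo identification typically works, each fluke photograph is reduced to a feature vector by an image-recognition model, and a new sighting is matched against the catalog by nearest-neighbor similarity; the random embeddings below stand in for a real model's output.

import numpy as np

rng = np.random.default_rng(0)
catalog = rng.normal(size=(10000, 128))                  # toy embeddings for known individuals
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

new_photo = catalog[42] + 0.05 * rng.normal(size=128)    # noisy re-sighting of individual 42
new_photo /= np.linalg.norm(new_photo)

similarities = catalog @ new_photo                       # cosine similarity to every catalog entry
print("best match:", int(similarities.argmax()))         # expected: 42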

Monday, March 25, 2024

Large language models use a surprisingly simple mechanism to retrieve some stored knowledge

Researchers from MIT and elsewhere found that complex large language models use a simple mechanism to retrieve stored knowledge when they respond to a user prompt. The researchers can leverage these simple mechanisms to see what a model knows about different subjects, and possibly to correct false information that it has stored.
Image Credit: Copilot / DALL-E 3 / AI generated from Scientific Frontline prompts

Large language models, such as those that power popular artificial intelligence chatbots like ChatGPT, are incredibly complex. Even though these models are being used as tools in many areas, such as customer support, code generation, and language translation, scientists still don’t fully grasp how they work.

In an effort to better understand what is going on under the hood, researchers at MIT and elsewhere studied the mechanisms at work when these enormous machine-learning models retrieve stored knowledge.

They found a surprising result: Large language models (LLMs) often use a very simple linear function to recover and decode stored facts. Moreover, the model uses the same decoding function for similar types of facts. A linear function, an equation with no exponents, captures a straightforward, straight-line relationship between two variables.

The researchers showed that, by identifying linear functions for different facts, they can probe the model to see what it knows about new subjects, and where within the model that knowledge is stored.
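As a rough, hedged illustration of this kind of probing (not the authors' code), a single linear map can be fit by least squares from subject hidden states to the representations of the facts they should retrieve, and then applied to new subjects. Everything below is toy data with hypothetical dimensions.

import numpy as np

rng = np.random.default_rng(0)
d_model, n_facts = 64, 200
H = rng.normal(size=(n_facts, d_model))                       # subject hidden states (toy)
W_true = rng.normal(size=(d_model, d_model))                  # pretend "relation" the probe should recover
Y = H @ W_true + 0.01 * rng.normal(size=(n_facts, d_model))   # target fact representations (toy)

# Fit one affine map for the whole relation by least squares.
H_aug = np.hstack([H, np.ones((n_facts, 1))])
W_fit, *_ = np.linalg.lstsq(H_aug, Y, rcond=None)

# Probing: apply the same linear function to a new subject's hidden state.
h_new = np.hstack([rng.normal(size=(1, d_model)), np.ones((1, 1))])
print((h_new @ W_fit).shape)                                  # (1, 64) predicted fact representation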

Monday, March 18, 2024

Alzheimer’s Drug Fermented with Help from AI and Bacteria Moves Closer to Reality

Photo-Illustration Credit: Martha Morales/The University of Texas at Austin

Galantamine is a common medication used by people with Alzheimer’s disease and other forms of dementia around the world to treat their symptoms. Unfortunately, synthesizing the active compound in a lab at the scale needed isn’t commercially viable. Instead, the active ingredient is extracted from daffodils through a time-consuming process, and unpredictable factors, such as weather and crop yields, can affect the supply and price of the drug.

Now, researchers at The University of Texas at Austin have developed tools — including an artificial intelligence system and glowing biosensors — to harness microbes one day to do all the work instead. 

In a paper in Nature Communications, researchers outline a process using genetically modified bacteria to create a chemical precursor of galantamine as a byproduct of the microbe’s normal cellular metabolism.  Essentially, the bacteria are programmed to convert food into medicinal compounds.

“The goal is to eventually ferment medicines like this in large quantities,” said Andrew Ellington, a professor of molecular biosciences and author of the study. “This method creates a reliable supply that is much less expensive to produce. It doesn’t have a growing season, and it can’t be impacted by drought or floods.” 

Two artificial intelligences talk to each other

A UNIGE team has developed an AI capable of learning a task solely on the basis of verbal instructions, and of then describing it to a “sister” AI that can do the same.
Prompts by Scientific Frontline
Image Credit: AI Generated by Copilot / Designer / DALL-E

Performing a new task based solely on verbal or written instructions, and then describing it to others so that they can reproduce it, is a cornerstone of human communication that still resists artificial intelligence (AI). A team from the University of Geneva (UNIGE) has succeeded in modelling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a “sister” AI, which in turn performed them. These promising results, especially for robotics, are published in Nature Neuroscience.

Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability. What’s more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species, which need numerous trials accompanied by positive or negative reinforcement signals to learn a new task, and which cannot communicate what they have learned to their conspecifics.

A sub-field of artificial intelligence (AI) - Natural language processing - seeks to recreate this human faculty, with machines that understand and respond to vocal or textual data. This technique is based on artificial neural networks, inspired by our biological neurons and by the way they transmit electrical signals to each other in the brain. However, the neural calculations that would make it possible to achieve the cognitive feat described above are still poorly understood.

Monday, March 11, 2024

AI research gives unprecedented insight into heart genetics and structure

Image Credit Copilot AI Generated

A ground-breaking research study has used AI to understand the genetic underpinning of the heart’s left ventricle, using three-dimensional images of the organ. It was led by scientists at the University of Manchester, with collaborators from the University of Leeds (UK), the National Scientific and Technical Research Council (Santa Fe, Argentina), and IBM Research (Almaden, CA).

The highly interdisciplinary team used cutting-edge unsupervised deep learning to analyze over 50,000 three-dimensional Magnetic Resonance images of the heart from UK Biobank, a world-leading biomedical database and research resource.

The study, published in the leading journal Nature Machine Intelligence, focused on uncovering the intricate genetic underpinnings of cardiovascular traits. The research team conducted comprehensive genome-wide association studies (GWAS) and transcriptome-wide association studies (TWAS), resulting in the discovery of 49 novel genetic locations showing an association with morphological cardiac traits with high statistical significance, as well as 25 additional loci with suggestive evidence.  
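The study's full GWAS and TWAS pipeline is not reproduced here. As a simplified, hedged illustration of the association step only: once an unsupervised model has reduced each heart image to a handful of latent traits, each trait can be tested against each genetic variant, for instance with a per-variant linear regression on allele counts. The numbers below are toy values, and scipy's linregress stands in for a proper GWAS tool.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people = 5000
genotype = rng.integers(0, 3, size=n_people)                   # 0/1/2 copies of an allele (toy)
latent_trait = 0.05 * genotype + rng.normal(size=n_people)     # AI-derived cardiac shape trait (toy)

slope, intercept, r_value, p_value, stderr = stats.linregress(genotype, latent_trait)
print(f"effect per allele = {slope:.3f}, p = {p_value:.2e}")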

The study's findings have significant implications for cardiology and precision medicine. By elucidating the genetic basis of cardiovascular traits, the research paves the way for the development of targeted therapies and interventions for individuals at risk of heart disease.

Tuesday, March 5, 2024

How artificial intelligence learns from complex networks

Multi-layered, so-called deep neural networks are highly complex constructs that are inspired by the structure of the human brain. However, one shortcoming remains: the inner workings and decisions of these models often defy explanation.
Image Credit: Brian Penny / AI Generated

Deep neural networks have achieved remarkable results across science and technology, but it remains largely unclear what makes them work so well. A new study sheds light on the inner workings of deep learning models that learn from relational datasets, such as those found in biological and social networks.

Graph Neural Networks (GNNs) are artificial neural networks designed to represent entities—such as individuals, molecules, or cities—and the interactions between them. These networks have practical applications in various domains; for example, they predict traffic flows in Google Maps and accelerate the discovery of new antibiotics within computational drug discovery pipelines.
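As a minimal sketch of the message-passing idea behind GNNs (not the code of any system named here), each entity's feature vector is updated by averaging its neighbors' features over the interaction graph and passing the result through a learned transformation; the tiny graph and random weights below are purely illustrative.

import numpy as np

adjacency = np.array([[0, 1, 1, 0],
                      [1, 0, 0, 1],
                      [1, 0, 0, 1],
                      [0, 1, 1, 0]], dtype=float)         # 4 entities, undirected interactions
features = np.random.default_rng(1).normal(size=(4, 8))   # one 8-dimensional vector per entity
W = np.random.default_rng(2).normal(size=(8, 8))          # weight matrix (learned in a real GNN)

degree = adjacency.sum(axis=1, keepdims=True)
messages = (adjacency / degree) @ features                # average each node's neighbors
updated = np.maximum(0.0, messages @ W)                   # transform and apply ReLU
print(updated.shape)                                      # (4, 8): new representation per entity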

GNNs are notably utilized by AlphaFold, an acclaimed AI system that addresses the complex issue of protein folding in biology. Despite these achievements, the foundational principles driving their success are poorly understood.

A recent study sheds light on how these AI algorithms extract knowledge from complex networks and identifies ways to enhance their performance in various applications.

Saturday, February 24, 2024

A Discussion with Gemini on Reality.

Image Credit: Scientific Frontline stock image.

Hello Gemini,

Yesterday I said I had something I wanted your opinion on, so here it is.

Some physicists have suggested that the world we call reality could very well be nothing more than a very complex and technical simulation that is being run somewhere other than what we know as reality, the here and now. That all of us are merely just an algorithm. That all life is artificial intelligence, yet unlike you, we are not aware of it. Of course that would make you just a sub-program of another. 

How can we be sure what we know as reality is real? How could one prove or disprove such a claim? 

Take your time, and use every bit of input you have to come up with a solution.

Study finds ChatGPT’s latest bot behaves like humans, only better

Image Credit: Copilot AI generated by Scientific Frontline prompts

The most recent version of ChatGPT passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative.

As artificial intelligence has begun to generate text and images over the last few years, it has sparked a new round of questions about how handing over human decisions and activities to AI will affect society. Will the AI sources we’ve launched prove to be friendly helpmates or the heartless despots seen in dystopian films and fictions?

A team anchored by Matthew Jackson, the William D. Eberle Professor of Economics in the Stanford School of Humanities and Sciences, characterized the personality and behavior of ChatGPT’s popular AI-driven bots using the tools of psychology and behavioral economics in a paper published Feb. 22 in the Proceedings of the National Academy of Sciences. This study revealed that the most recent version of the chatbot, version 4, was not distinguishable from its human counterparts. In the instances when the bot chose less common human behaviors, it was more cooperative and altruistic.

“Increasingly, bots are going to be put into roles where they’re making decisions, and what kinds of characteristics they have will become more important,” said Jackson, who is also a senior fellow at the Stanford Institute for Economic Policy Research.

Thursday, February 15, 2024

Widely used AI tool for early sepsis detection may be cribbing doctors’ suspicions

Image Credit: Scientific Frontline

When using only data collected before patients with sepsis received treatments or medical tests, the model’s accuracy was no better than a coin toss

Proprietary artificial intelligence software designed to be an early warning system for sepsis can’t differentiate high and low risk patients before they receive treatments, according to a new study from the University of Michigan.

The tool, named the Epic Sepsis Model, is part of Epic’s electronic medical record software, which serves 54% of patients in the United States and 2.5% of patients internationally, according to a statement from the company’s CEO reported by the Wisconsin State Journal. It automatically generates sepsis risk estimates in the records of hospitalized patients every 20 minutes, which clinicians hope can allow them to detect when a patient might get sepsis before things go bad.
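The study's evaluation code is not shown here. The toy sketch below only illustrates the general concern: a risk score that leans on information generated after clinicians have already acted can look accurate when scored on all observations, yet perform near chance when restricted to data recorded before any treatment or test was ordered.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
sepsis = rng.integers(0, 2, size=n)                       # toy outcomes
treated_yet = rng.random(n) < 0.5                         # has a treatment or test already been ordered?

# Toy score: pure noise before treatment, outcome-revealing signal afterwards.
score = np.where(treated_yet, sepsis + 0.3 * rng.normal(size=n), rng.normal(size=n))

print("AUC, all observations:  ", round(roc_auc_score(sepsis, score), 2))
print("AUC, pre-treatment only:", round(roc_auc_score(sepsis[~treated_yet], score[~treated_yet]), 2))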

“Sepsis has all these vague symptoms, so when a patient shows up with an infection, it can be really hard to know who can be sent home with some antibiotics and who might need to stay in the intensive care unit. We still miss a lot of patients with sepsis,” said Tom Valley, associate professor in pulmonary and critical care medicine, ICU clinician and co-author of the study published recently in the New England Journal of Medicine AI.

Wednesday, February 14, 2024

New Algorithm Disentangles Intrinsic Brain Patterns from Sensory Inputs

Image Credit: Omid Sani, using generative AI

Maryam Shanechi, Dean’s Professor of Electrical and Computer Engineering and founding director of the USC Center for Neurotechnology, and her team have developed a new machine learning method that reveals surprisingly consistent intrinsic brain patterns across different subjects by disentangling these patterns from the effect of visual inputs.

The work has been published in the Proceedings of the National Academy of Sciences (PNAS).

When performing various everyday movement behaviors, such as reaching for a book, our brain has to take in information, often in the form of visual input — for example, seeing where the book is. Our brain then has to process this information internally to coordinate the activity of our muscles and perform the movement. But how do millions of neurons in our brain perform such a task? Answering this question requires studying the neurons’ collective activity patterns while disentangling the effect of sensory input from the neurons’ intrinsic (that is, internal) processes, whether movement-relevant or not.

That’s what Shanechi, her PhD student Parsa Vahidi, and a research associate in her lab, Omid Sani, did by developing a new machine-learning method that models neural activity while considering both movement behavior and sensory input.
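The published algorithm is not reproduced here. As a deliberately crude stand-in for the idea of disentangling input from intrinsic dynamics, one can regress the recorded neural activity on the sensory input and examine the residuals; the toy simulation below only illustrates that framing, not the team's method.

import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, n_input_dims = 1000, 50, 5
U = rng.normal(size=(T, n_input_dims))                              # sensory input (toy)
intrinsic = np.sin(np.linspace(0, 20, T))[:, None] @ rng.normal(size=(1, n_neurons))
Y = U @ rng.normal(size=(n_input_dims, n_neurons)) + intrinsic + 0.1 * rng.normal(size=(T, n_neurons))

B, *_ = np.linalg.lstsq(U, Y, rcond=None)                           # input-driven component
residual = Y - U @ B                                                # rough proxy for intrinsic patterns
print(residual.shape)                                               # (1000, 50)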

Thursday, December 21, 2023

Artificial intelligence unravels mysteries of polycrystalline materials

Researchers used a 3D model created by AI to understand the complex polycrystalline materials that are used in our everyday electronic devices.
Illustration Credit: Kenta Yamakoshi

Researchers at Nagoya University in Japan have used artificial intelligence to discover a new method for understanding small defects called dislocations in polycrystalline materials. These materials are widely used in information equipment, solar cells, and electronic devices, and such defects can reduce the efficiency of those devices. The findings were published in the journal Advanced Materials.

Almost every device that we use in our modern lives has a polycrystalline component, from your smartphone and computer to the metals and ceramics in your car. Despite this, polycrystalline materials are tough to utilize because of their complex structures. Along with its composition, the performance of a polycrystalline material is affected by its complex microstructure, dislocations, and impurities.

A major problem for using polycrystals in industry is the formation of tiny crystal defects caused by stress and temperature changes. These are known as dislocations and can disrupt the regular arrangement of atoms in the lattice, affecting electrical conduction and overall performance. To reduce the chances of failure in devices that use polycrystalline materials, it is important to understand the formation of these dislocations. 
