Scientific Frontline: Artificial Intelligence
Showing posts with label Artificial Intelligence.

Tuesday, January 27, 2026

Streaks on Mercury show: Mercury is not a "dead planet"

Image of the streaks or ‘lineae’ on the slopes of a crater wall on Mercury and the bright hollows from which the streaks originate. The image was taken by MESSENGER on April 10, 2014.
Image Credit: © NASA/JHUAPL/Carnegie Institution of Washington

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: A systematic analysis has identified approximately 400 bright slope streaks, or "lineae," on Mercury, indicating the planet is currently geologically active through the outgassing of subsurface volatiles.
  • Methodology: Researchers employed a deep learning algorithm to automatically screen and analyze over 100,000 high-resolution images captured by NASA's MESSENGER spacecraft during its 2011–2015 orbital mission (a toy screening sketch follows this summary).
  • Key Data: The study produced the first comprehensive census of roughly 400 streaks—compared to only a handful previously known—revealing a distinct accumulation on the sun-facing slopes of young impact craters.
  • Significance: These findings overturn the prevailing assumption that Mercury is a "dead" and static world, suggesting a continuous, solar-driven release of elements like sulfur into space.
  • Future Application: This inventory will serve as a baseline for the ESA/JAXA BepiColombo mission to re-image these regions, allowing scientists to detect new streak formation and quantify the planet's volatile budget.
  • Branch of Science: Planetary Geology and Remote Sensing.
  • Additional Detail: The formation of these streaks is attributed to solar radiation mobilizing volatiles through crack networks created by impact events, often originating from bright, shallow depressions known as hollows.
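
The screening step named in the methodology bullet lends itself to a small illustration. The sketch below is not the study's model; it is a generic PyTorch stand-in showing how a compact convolutional classifier could flag image tiles that might contain streaks for human review. The architecture, tile size, and 0.5 threshold are all assumptions.

```python
# Toy streak-screening sketch (illustrative only, not the published pipeline).
import torch
import torch.nn as nn

class StreakScreen(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))

    def forward(self, x):  # x: (batch, 1, H, W) grayscale tiles
        return torch.sigmoid(self.head(self.features(x)))  # P(streak)

model = StreakScreen()
tiles = torch.rand(8, 1, 64, 64)  # stand-in for MESSENGER image tiles
scores = model(tiles)
flagged = (scores.squeeze(1) > 0.5).nonzero().flatten()
print("tiles flagged for review:", flagged.tolist())
```

Only the tiles that pass such a screen would proceed to expert verification, which is one way a census over 100,000 images becomes tractable.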

Monday, January 26, 2026

AI-powered model advances treatment planning for patients with spinal metastasis

Image Credit: Scientific Frontline / AI generated (Gemini)

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Researchers developed a machine learning-based prognostic scoring system for spinal metastasis that accurately predicts one-year survival using modern clinical data.
  • Methodology: The team employed Least Absolute Shrinkage and Selection Operator (LASSO) logistic regression to analyze prospective data from 401 patients undergoing surgery at 35 medical institutions (a minimal code sketch of this workflow follows the summary).
  • Key Data: The model demonstrated high accuracy with an AUROC of 0.762, distinguishing one-year survival rates between low-risk (82.2%), intermediate-risk (67.2%), and high-risk (34.2%) groups.
  • Significance: This tool resolves the limitations of traditional scoring systems based on obsolete 1990s data by integrating outcomes from contemporary treatments like molecularly targeted therapies and immunotherapies.
  • Future Application: Clinical deployment to guide surgical versus palliative care decisions, with ongoing plans to validate the model's efficacy using international datasets.
  • Branch of Science: Orthopedics, Oncology, and Data Science
  • Additional Detail: Prognostic stratification relies on five non-invasive variables: vitality index, age, performance status, bone metastasis presence, and preoperative opioid usage.
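
Because the methodology bullet names a concrete, standard technique, here is a minimal sketch of L1-penalized (LASSO) logistic regression in scikit-learn. The data are synthetic, the five predictor names echo the "Additional Detail" bullet, and the regularization strength is an arbitrary assumption.

```python
# Minimal LASSO-style workflow sketch; synthetic data, not the study's records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Five illustrative predictors: vitality index, age, performance status,
# bone-metastasis presence, preoperative opioid use (all synthetic here).
X = rng.normal(size=(401, 5))
y = (X @ np.array([0.8, -0.5, -0.9, -0.4, -0.6]) + rng.normal(size=401)) > 0

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)  # L1 = LASSO
model.fit(X, y)
print("retained coefficients:", np.round(model.coef_, 2))
print("AUROC (training data):", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
```

The L1 penalty shrinks uninformative coefficients to exactly zero, which is why LASSO is a natural fit for distilling many candidate clinical variables down to a handful.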

Artificial intelligence makes quantum field theories computable

Quantum field theory on the computer
If you make the calculation grid ever finer, what happens to the result?
Image Credit: © TU Wien  

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Researchers successfully utilized Artificial Intelligence to solve a long-standing problem in particle physics: calculating Quantum Field Theories (QFT) on a lattice with optimal precision.
  • Methodology: The team employed a specialized neural network architecture called "Lattice Gauge Equivariant Convolutional Neural Networks" (L-CNNs) to learn a "Fixed Point Action." This mathematical formulation allows the physics of the continuum to be mapped perfectly onto a coarse discrete grid, eliminating typical discretization errors. A toy lattice example follows this summary.
  • Key Data: The AI-driven approach significantly overcomes the "Critical Slowing Down" phenomenon, a major computational bottleneck where the cost of simulation increases dramatically as the lattice is refined. The new method allows simulations on coarse lattices to yield results as precise as those from extremely fine lattices, making previously impossible calculations feasible.
  • Significance: This breakthrough enables the reliable and efficient simulation of complex quantum systems, such as the quark-gluon plasma (the state of the universe shortly after the Big Bang) or the internal structure of atomic nuclei, which were previously too computationally expensive for even the world's most powerful supercomputers.
  • Future Application: The technique will be applied to gain deeper insights into the early universe, simulate experiments in particle colliders (like the Large Hadron Collider) with higher fidelity, and potentially explore new physics beyond the Standard Model by allowing for more rigorous error quantification.
  • Branch of Science: Theoretical Particle Physics, Lattice Field Theory, and Artificial Intelligence (Machine Learning).
  • Additional Detail: By using L-CNNs, the researchers ensured that the neural networks respect the fundamental symmetries of the gauge theories (gauge invariance), which is critical for the physical validity of the simulations.
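
To ground the lattice terminology, the toy below computes the naive Wilson plaquette action for a 2-D U(1) gauge field with NumPy. This is the baseline discretized object whose errors grow on coarse grids; the L-CNN work learns a fixed-point action to replace it, which this sketch does not attempt. Lattice size and couplings are arbitrary.

```python
# Toy Wilson plaquette action on a 2-D U(1) lattice (baseline object only).
import numpy as np

rng = np.random.default_rng(1)
L = 8
theta = rng.uniform(-np.pi, np.pi, size=(2, L, L))  # link angles theta_mu(x)

def plaquette(theta):
    # P(x) = theta_x(x) + theta_y(x+x_hat) - theta_x(x+y_hat) - theta_y(x)
    tx, ty = theta
    return tx + np.roll(ty, -1, axis=0) - np.roll(tx, -1, axis=1) - ty

S = np.sum(1.0 - np.cos(plaquette(theta)))  # Wilson action
print("Wilson action on an 8x8 lattice:", round(float(S), 3))
```

The plaquette is the smallest gauge-invariant loop on the lattice; respecting that invariance is exactly the symmetry constraint the L-CNN architecture builds in.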

Saturday, January 24, 2026

AI generates short DNA sequences that show promise for gene therapies

Scientists are training AI models to recognize and write pieces of human DNA that control gene expression, in hopes that one day these synthetic sequences can improve genetic medicine.
Image Credit: Scientific Frontline / AI generated (Gemini)

Scientific Frontline: Extended "At a Glance" Summary

  • The Core Concept: A generative AI model designed to create synthetic DNA sequences, specifically cis-regulatory elements (CREs), that can precisely control gene activity within targeted cell types.
  • Key Distinction/Mechanism: Unlike traditional methods that modify existing DNA by removing or inserting segments, this model generates entirely new, functional sequences from scratch. It adapts diffusion model technology—similar to that used in image generators like DALL-E—to analyze chromatin accessibility data and write novel genetic "instructions." A simplified diffusion sketch follows this summary.
  • Origin/History: Developed by scientists at the Broad Institute and Mass General Brigham; the study was published in Nature Genetics in December 2025, with further details released in January 2026.
  • Major Frameworks/Components:
    • Diffusion Models: The generative AI architecture used to create the sequences.
    • Cis-Regulatory Elements (CREs): The short DNA segments targeted for generation, responsible for tuning gene expression.
    • Chromatin Accessibility Data: The training dataset used to teach the model which regulatory elements are active in specific cells.
    • AXIN2: A protective gene used as a proof-of-concept target to demonstrate the model's ability to reactivate suppressed genes in leukemia cells.
  • Branch of Science:
    • Computational Biology / Bioinformatics
    • Artificial Intelligence (Generative AI)
    • Genetics and Genomics
  • Future Application: The technology aims to enhance gene therapies by creating synthetic regulatory elements that ensure treatments are active only in the correct tissues. Future uses could involve pairing these sequences with delivery vectors like adeno-associated viruses (AAVs) or genome editors.
  • Why It Matters: This advancement moves beyond merely editing the genome to "writing" it, enabling the design of highly specific, potent genetic switches. This could lead to more effective treatments for complex diseases like cancer by ensuring therapeutic genes are turned on more effectively than their natural counterparts would allow.
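
As a rough intuition pump for the diffusion mechanism described above, the sketch below one-hot encodes a DNA sequence, corrupts it part-way along a toy noise schedule, and "denoises" it with a nearest-base projection. A real CRE generator uses a trained neural denoiser conditioned on chromatin-accessibility data; every number and the projection step here are assumptions.

```python
# Heavily simplified continuous-relaxation sketch of diffusion over DNA.
import numpy as np

rng = np.random.default_rng(2)
BASES = np.array(list("ACGT"))

def one_hot(seq):
    return (seq[:, None] == BASES[None, :]).astype(float)

seq = rng.choice(BASES, size=20)
x0 = one_hot(seq)

alphas = np.linspace(1.0, 0.05, 10)  # toy noise schedule (1.0 = clean)
a = alphas[3]                         # corrupt part-way along the schedule
x_t = a * x0 + np.sqrt(1 - a**2) * rng.normal(size=x0.shape)

recovered = BASES[x_t.argmax(axis=1)]  # stand-in for a learned denoiser
print("original :", "".join(seq))
print("recovered:", "".join(recovered))  # some mismatches are expected
```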

Thursday, January 22, 2026

An AI to predict the risk of cancer metastases

Group of human colon cancer cells with invasive behavior. Cell nuclei are in yellow and cell bodies in red. The finger-like protrusions of invasive cells are in the upper-right region.
Image Credit: © Ariel Ruiz i Altaba, UNIGE 

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Researchers at the University of Geneva (UNIGE) have developed an artificial intelligence algorithm capable of predicting the risk of cancer metastasis and recurrence with high reliability.
  • Methodology: The team identified specific gene expression signatures in colon cancer cells that drive invasive behavior and trained a predictive model, named MangroveGS, to analyze these genomic patterns across various tumor types to assess metastatic probability (a simplified signature-scoring sketch follows this summary).
  • Key Data: After training, the AI model achieved a predictive accuracy of nearly 80% in forecasting the occurrence of metastases, transforming complex genomic data into actionable prognostic information.
  • Significance: This study fundamentally challenges the concept of cancer as "anarchic" cell growth, instead framing it as a distorted form of orderly biological development where suppressed genetic programs are reactivated.
  • Future Application: The algorithm will enable clinicians to stratify patients based on metastatic risk, facilitating personalized treatment strategies and identifying new therapeutic targets to block the spread of tumors.
  • Branch of Science: Oncology, Genetics, and Artificial Intelligence.
  • Additional Detail: The research highlights that metastatic potential is defined by the reactivation of ancient developmental programs, providing a predictable "logic" to tumor progression that can be decoded by AI.
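
A simplified version of the signature readout the methodology describes: score each tumor sample by the mean z-scored expression of an "invasion" gene set, then threshold into risk groups. The gene names, cutoff, and data below are placeholders; MangroveGS itself is more sophisticated.

```python
# Toy gene-signature scoring sketch (synthetic data, hypothetical gene set).
import numpy as np

rng = np.random.default_rng(3)
genes = [f"g{i}" for i in range(200)]
signature = genes[:12]                             # hypothetical invasion genes
expr = rng.normal(size=(50, 200))                  # samples x genes (synthetic)

z = (expr - expr.mean(axis=0)) / expr.std(axis=0)  # z-score each gene
idx = [genes.index(g) for g in signature]
score = z[:, idx].mean(axis=1)                     # per-sample signature score

high_risk = score > np.percentile(score, 80)       # arbitrary 80th-percentile cut
print(f"{high_risk.sum()} of {len(score)} samples flagged as high metastatic risk")
```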

Tuesday, January 20, 2026

Physicists employ AI labmates to supercharge LED light control

Sandia National Laboratories scientists Saaketh Desai, left, and Prasad Iyer, modernized an optics lab with a team of artificial intelligences that learn data, design and run experiments, and interpret results.
Photo Credit: Craig Fritz

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: A team of artificial intelligence agents successfully optimized the steering of LED light fourfold in approximately five hours, a task researchers previously estimated would require years of manual experimentation.
  • Methodology: Researchers established a "self-driving lab" utilizing three distinct AI agents: a generative AI to simplify complex data, an active learning agent to autonomously design and execute experiments on optical equipment, and a third "equation learner" AI to derive mathematical formulas validating the results and ensuring interpretability (a bare-bones active-learning loop is sketched after this summary).
  • Key Data: The AI system executed 300 experiments to achieve an average 2.2-times improvement in light steering efficiency across a 74-degree angle, with specific angles showing a fourfold increase in performance compared to previous human-led efforts.
  • Significance: This study demonstrates that AI can transcend mere automation to become a collaborative engine for scientific discovery, solving the "black box" problem by generating verifiable equations that explain the underlying physics of the optimized results.
  • Future Application: Refined control of spontaneous light emission could allow cheaper, smaller, and more efficient LEDs to replace lasers in technologies such as holographic projectors, self-driving cars, and UPC scanners.
  • Branch of Science: Nanophotonics, Optics, and Artificial Intelligence.
  • Additional Detail: The AI agents identified a solution based on a fundamentally new conceptual approach to nanoscale light-material interactions that the human research team had not previously considered.
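
The active-learning agent at the heart of that loop can be caricatured in a few lines: fit a surrogate model to past measurements, then choose the next experiment by expected improvement. The objective function below is a stand-in for the real optical rig, and all settings are illustrative.

```python
# Bare-bones active-learning loop with a Gaussian-process surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from scipy.stats import norm

def measure(angle):  # placeholder for the LED-steering measurement
    return np.sin(3 * angle) * np.exp(-0.1 * angle**2)

X = list(np.linspace(-2, 2, 4))   # a few seed experiments
y = [measure(x) for x in X]
grid = np.linspace(-2, 2, 200)    # candidate settings

for step in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(np.c_[X], y)
    mu, sd = gp.predict(grid[:, None], return_std=True)
    best = max(y)
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = grid[int(ei.argmax())]                    # next experiment to run
    X.append(x_next); y.append(measure(x_next))

print("best steering setting found:", round(X[int(np.argmax(y))], 3))
```

Each pass trades off exploiting promising settings against exploring uncertain ones, which is how 300 automated experiments can beat years of manual search.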

Thursday, January 15, 2026

Fermilab researchers supercharge neural networks, boosting potential of AI to revolutionize particle physics

Nhan Tran, head of Fermilab’s AI Coordination Office, holds a circuit board used for particle tracker data analysis.
Photo Credit: JJ Starr, Fermilab

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Fermilab researchers led the development of hls4ml, an open-source framework capable of embedding neural networks directly into customized digital hardware.
  • Methodology: The software automatically translates machine learning code from libraries such as PyTorch and TensorFlow into logic gates compatible with field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs); a sketch of this conversion flow follows the summary.
  • Key Data: Specialized hardware utilizing this framework can execute more than 10 million decisions per second, a necessity for managing the six-fold data increase projected for the High-Luminosity Large Hadron Collider.
  • Significance: By processing algorithms in real time with reduced latency and power usage, the system ensures that critical scientific data is identified and stored rather than discarded during high-volume experiments.
  • Future Application: Primary deployment targets the CMS experiment trigger system, with broader utility in fusion energy research, neuroscience, and materials science.
  • Branch of Science: Particle Physics, Artificial Intelligence, and Microelectronics.
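
Since hls4ml is open source, its basic flow can be shown directly. The sketch follows the project's documented Keras path: build a small network, derive an hls4ml configuration, and convert it into an HLS project. The layer sizes and output directory are placeholders, and exact function signatures can vary between hls4ml versions.

```python
# Sketch of the documented hls4ml Keras flow (sizes and paths are placeholders).
import hls4ml
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(5, activation="softmax"),  # e.g., a trigger decision
])

config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls4ml_trigger_prj",  # hypothetical project directory
)
hls_model.compile()  # builds a bit-accurate C++ emulation of the firmware
```

From here the generated project is synthesized into FPGA firmware, which is where the microsecond-scale trigger latencies come from.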

Wednesday, January 14, 2026

A Robot Learns to Lip Sync


Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Columbia Engineering researchers developed a robot that autonomously learns to lip-sync to speech and song through observational learning, bypassing traditional rule-based programming.
  • Methodology: The system utilizes a "vision-to-action" language model (VLA) where the robot first maps its own facial mechanics by watching its reflection, then correlates these movements with human lip dynamics observed in YouTube videos (a minimal audio-to-motor sketch follows this summary).
  • Specific Detail/Mechanism: The robot features a flexible silicone skin driven by 26 independent motors, allowing it to translate audio signals directly into motor actions without explicit instruction on phoneme shapes.
  • Key Statistic or Data: The robot successfully articulated words in multiple languages and performed songs from an AI-generated album, utilizing training data from thousands of random facial expressions and hours of human video footage.
  • Context or Comparison: Unlike standard humanoids that use rigid, pre-defined facial choreographies, this data-driven approach aims to resolve the "Uncanny Valley" effect by generating fluid, human-like motion.
  • Significance/Future Application: This technology addresses the "missing link" of facial affect in robotics, a critical component for effectively deploying humanoid robots in social roles such as elder care, education, and service industries.
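
This is not Columbia's model, but the core mapping it learns, from audio features to motor commands, can be sketched as a simple regression network. The 26-dimensional output (matching the robot's 26 facial motors) is the only detail borrowed from the summary; the 80 mel bands and layer sizes are assumptions.

```python
# Minimal audio-to-motor regression sketch (illustrative, not the actual VLA).
import torch
import torch.nn as nn

audio_to_motors = nn.Sequential(
    nn.Linear(80, 128), nn.ReLU(),  # 80 mel-band energies per audio frame
    nn.Linear(128, 26), nn.Tanh(),  # 26 motor commands, bounded in [-1, 1]
)

frames = torch.rand(100, 80)        # stand-in for a 100-frame spectrogram
commands = audio_to_motors(frames)  # one motor vector per audio frame
print(commands.shape)               # torch.Size([100, 26])
```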

Monday, January 12, 2026

Intraoperative Tumor Histology May Enable More-Effective Cancer Surgeries

From left to right: Images of kidney tissue as detected with UV-PAM, as imaged by AI to mimic traditional H&E staining, and as they appear when directly treated with H&E staining.
Image Credit: Courtesy of California Institute of Technology

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Researchers developed ultraviolet photoacoustic microscopy (UV-PAM) integrated with deep learning to perform rapid, label-free, subcellular-resolution histology on excised tumor tissue directly in the operating room.
  • Mechanism: A low-energy laser excites the absorption peaks of DNA and RNA nucleic acids to generate ultrasonic vibrations; AI algorithms then process these signals to create virtual images that mimic traditional hematoxylin and eosin (H&E) staining without chemical processing (a toy virtual-staining sketch follows this summary).
  • Key Data: The system achieves a spatial resolution of 200 to 300 nanometers and delivers diagnostic results in under 10 minutes (potentially under 5 minutes), effectively identifying the dense, enlarged nuclei characteristic of cancer cells.
  • Context: Unlike standard pathology, which requires time-consuming freezing, fixation, and slicing that can damage fatty tissues like breast tissue, this method preserves sample integrity and eliminates preparation artifacts.
  • Significance: This technology aims to drastically reduce re-operation rates—currently up to one-third for breast cancer lumpectomies—by allowing surgeons to confirm clean tumor margins intraoperatively across various tissue types (breast, bone, skin, organ).
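
A toy stand-in for the virtual-staining step: a tiny convolutional network trained to map 1-channel UV-PAM-like images to 3-channel H&E-like images on paired examples. The real pipeline and its training data are far richer; every tensor below is a random placeholder.

```python
# Toy virtual-staining sketch: 1-channel input -> 3-channel "H&E" output.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

pam = torch.rand(4, 1, 64, 64)  # placeholder UV-PAM acquisitions
he = torch.rand(4, 3, 64, 64)   # placeholder co-registered H&E targets

for _ in range(5):              # a few illustrative training steps
    opt.zero_grad()
    loss = nn.functional.l1_loss(net(pam), he)
    loss.backward()
    opt.step()
print("final L1 loss:", round(loss.item(), 4))
```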

Saturday, January 10, 2026

What Is: Organoid

Organoids: The Science and Ethics of Mini-Organs
Image Credit: Scientific Frontline / AI generated

The "At a Glance" Summary

  • Defining the Architecture: Unlike traditional cell cultures, organoids are 3D structures grown from induced pluripotent stem cells (iPSCs) or adult stem cells. They rely on the cells' intrinsic ability to self-organize, creating complex structures that mimic the lineage and spatial arrangement of an in vivo organ.
  • The "Avatar" in the Lab: Organoids allow for Personalized Medicine. By growing an organoid from a specific patient's cells, researchers can test drug responses on a "digital twin" of that patient’s tumor or tissue, eliminating the guesswork of trial-and-error prescriptions.
  • Bridge to Clinical Trials: Organoids serve as a critical bridge between the Petri dish and human clinical trials, potentially reducing the failure rate of new drugs and decreasing the reliance on animal testing models which often fail to predict human reactions.
  • The Ethical Frontier: As cerebral organoids (mini-brains) become more complex, exhibiting brain waves similar to preterm infants, science faces a profound question: At what point does biological complexity become sentience?

Thursday, January 8, 2026

How light reflects on leaves may help researchers identify dying forests

Trees at UNDERC
Photo Credit: Barbara Johnston/University of Notre Dame

Early detection of declining forest health is critical for the timely intervention and treatment of drought-stressed and diseased flora, especially in areas prone to wildfires. Obtaining a reliable measure of whole-ecosystem health before it is too late, however, is an ongoing challenge for forest ecologists.

Traditional sampling is too labor-intensive for whole-forest surveys, while modern genomics—though capable of pinpointing active genes—is still too expensive for large-scale application. Remote sensing offers a high-resolution solution from the skies, but currently limited paradigms for data analysis mean the images obtained do not say enough, early enough.

A new study from researchers at the University of Notre Dame, published in Communications Earth & Environment, uncovers a more comprehensive picture of forest health. Funded by NASA, the research shows that spectral reflectance—a measurement obtained from satellite images—corresponds with the expression of specific genes.

Reflectance is how much light reflects off of leaf material, and at which specific wavelengths, in the visible and near-infrared range. Calculated as the ratio of reflected light to incoming light and measured using special sensors, reflectance data reveals a unique signature specific to the leaf’s composition and condition.
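
That definition translates directly into code. The sketch below computes band-wise reflectance as the ratio of reflected to incoming light and derives NDVI, a common vegetation-health index; the spectra are synthetic and the band choices approximate.

```python
# Reflectance as defined above: reflected light / incoming light, per band.
import numpy as np

wavelengths = np.linspace(400, 900, 6)  # nm, visible through near-infrared
incoming = np.array([1.0, 1.1, 1.2, 1.1, 1.0, 0.9])          # irradiance (synthetic)
reflected = np.array([0.05, 0.08, 0.07, 0.10, 0.45, 0.48])   # leaf-reflected light

reflectance = reflected / incoming      # the ratio described in the text
red, nir = reflectance[2], reflectance[4]  # ~600 nm (red) vs ~800 nm (NIR)
ndvi = (nir - red) / (nir + red)        # a widely used vegetation index
print("reflectance:", np.round(reflectance, 3))
print("NDVI:", round(ndvi, 3))
```

Healthy leaves absorb strongly in the red and reflect strongly in the near-infrared, so shifts in this signature can flag stress before it is visible to the eye.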

Wednesday, January 7, 2026

Nature-inspired computers are shockingly good at math

Researchers Brad Theilman, center, and Felix Wang, behind, unpack a neuromorphic computing core at Sandia National Laboratories. While the hardware might look similar to a regular computer, the circuitry is radically different. It applies elements of neuroscience to operate more like a brain, which is extremely energy-efficient.
Photo Credit: Craig Fritz

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Neuromorphic (brain-inspired) computing systems have been proven capable of solving partial differential equations (PDEs) with high efficiency, a task previously believed to be the exclusive domain of traditional, energy-intensive supercomputers.
  • Methodology: Researchers at Sandia National Laboratories developed a novel algorithm that utilizes a circuit model based on cortical networks to execute complex mathematical calculations, effectively mapping brain-like architecture to rigorous physical simulations (a conceptual relaxation sketch follows this summary).
  • Theoretical Breakthrough: The study establishes a mathematical link between a computational neuroscience model introduced 12 years ago and the solution of PDEs, demonstrating that neuromorphic hardware can handle deterministic math, not just pattern recognition.
  • Comparison: Unlike conventional supercomputers that require immense power for simulations (such as fluid dynamics or electromagnetic fields), this neuromorphic approach mimics the brain's ability to perform exascale-level computations with minimal energy consumption.
  • Primary Implication: This advancement could enable the development of neuromorphic supercomputers for national security and nuclear stockpile simulations, significantly reducing the energy footprint of critical scientific modeling.
  • Secondary Significance: The findings suggest that "diseases of the brain could be diseases of computation," providing a new framework for understanding neurological conditions by studying how these biological-style networks process information.
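
As a conceptual sketch only (not Sandia's algorithm), the snippet below lets a grid of leaky, locally connected units relax toward the steady state of the 2-D heat equation. It illustrates how simple neuron-like updates that only consult their neighbors can settle into a PDE solution; grid size and rates are arbitrary.

```python
# Conceptual sketch: local, leaky updates relaxing to a 2-D heat-equation solution.
import numpy as np

N = 32
u = np.zeros((N, N))
u[0, :] = 1.0                   # fixed "hot" boundary along one edge

for _ in range(2000):           # each cell nudges toward its neighbors' average
    neighbor_avg = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    u[1:-1, 1:-1] += 0.5 * (neighbor_avg - u[1:-1, 1:-1])  # leaky relaxation

print("temperature at grid center:", round(u[N // 2, N // 2], 4))
```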

Tuesday, January 6, 2026

AI model predicts disease risk while you sleep

SleepFM utilizes diverse physiological data streams, highlighting the potential to improve disease forecasting and better understand health risks.
Image Credit: Scientific Frontline / AI generated (Gemini)

The first artificial intelligence model of its kind can predict more than 100 health conditions from one night’s sleep.

A poor night’s sleep portends a bleary-eyed next day, but it could also hint at diseases that will strike years down the road. A new artificial intelligence model developed by Stanford Medicine researchers and their colleagues can use physiological recordings from one night’s sleep to predict a person’s risk of developing more than 100 health conditions.

Known as SleepFM, the model was trained on nearly 600,000 hours of sleep data collected from 65,000 participants. The sleep data comes from polysomnography, a comprehensive sleep assessment that uses various sensors to record brain activity, heart activity, respiratory signals, leg movements, eye movements, and more.

Monday, December 29, 2025

Machine learning drives drug repurposing for neuroblastoma

Daniel Bexell leads the research group in molecular pediatric oncology, and Katarzyna Radke, first author of the study.
Photo Credit: Lund University

Using machine learning and a large volume of data on genes and existing drugs, researchers at Lund University in Sweden have identified a combination of statins and phenothiazines that is particularly promising for treating the aggressive form of neuroblastoma. In experimental trials, the combination slowed tumor growth and improved survival rates.

The childhood cancer neuroblastoma affects around 15–20 children in Sweden every year, most of whom fall ill before the age of five. Among other things, neuroblastoma is characterized by tumors that are often resistant to drug treatment, including chemotherapy. The disease exists in both mild and severe forms, and the Lund University researchers are mainly studying the aggressive form, high-risk neuroblastoma. This variant is the form of childhood cancer with the lowest survival rate.

Friday, December 26, 2025

The Invisible Scale: Measuring AI’s Return on Energy

The Coin of Energy: Efficiency Paying for Itself
Image Credit: Scientific Frontline

In the public imagination, Artificial Intelligence is often visualized as a chatbot writing a poem or a generator creating a surreal image. This trivializes the technology and magnifies the scrutiny on its energy consumption. When AI is viewed as a toy, its electricity bill seems indefensible.

But when viewed as a scientific instrument—akin to a particle accelerator or an electron microscope—the equation shifts. The question is not "How much power does AI use?" but rather "What is the return on that energy investment?"

When measured across a single human lifetime, the dividends of AI in time, cost, and survival are staggering.

Thursday, December 25, 2025

Why can’t powerful AIs learn basic multiplication?

Image Credit: Scientific Frontline / Stock image

These days, large language models can handle increasingly complex tasks, writing intricate code and engaging in sophisticated reasoning.

But when it comes to four-digit multiplication, a task taught in elementary school, even state-of-the-art systems fail. Why? 

A new paper by Xiaoyan Bai, a computer science Ph.D. student at the University of Chicago, and Chenhao Tan, faculty co-director of the Data Science Institute's Novel Intelligence Research Initiative, finds answers by reverse-engineering both failure and success.

They worked with collaborators from MIT, Harvard University, University of Waterloo and Google DeepMind to probe AI’s “jagged frontier”—a term for its capacity to excel at complex reasoning yet stumble on seemingly simple tasks.
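
For concreteness, the schoolbook algorithm at issue looks like this: four-digit multiplication decomposes into a series of partial products whose intermediate values must all be tracked exactly. The sketch below makes that hidden bookkeeping explicit, which is precisely the kind of long-range dependency the paper probes.

```python
# Schoolbook long multiplication, with the intermediate partial products printed.
def long_multiply(a: int, b: int) -> int:
    digits_b = [int(d) for d in str(b)][::-1]  # least-significant digit first
    total = 0
    for place, d in enumerate(digits_b):
        partial = a * d * 10**place            # one partial product per digit
        print(f"{a} x {d} x 10^{place} = {partial}")
        total += partial                       # exact running sum handles carries
    return total

assert long_multiply(1234, 5678) == 1234 * 5678
```

A model that predicts digits left to right has to carry this entire intermediate state implicitly, which is one hypothesis for why the task is so brittle.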

The Quest for the Synthetic Synapse

Spike Timing" difference (Biology vs. Silicon)
Image Credit: Scientific Frontline

The modern AI revolution is built on a paradox: it is incredibly smart, but thermodynamically reckless. A large language model requires megawatts of power to function, whereas the human brain—which allows you to drive a car, debate philosophy, and regulate a heartbeat simultaneously—runs on roughly 20 watts, the equivalent of a dim lightbulb.

To close this gap, science is moving away from the "Von Neumann" architecture (where memory and processing are separate) toward Neuromorphic Computing—chips that mimic the physical structure of the brain. This report analyzes how close we are to building a "synthetic synapse."
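
One concrete behavior a synthetic synapse must reproduce is spike-timing-dependent plasticity (STDP): a connection strengthens when the presynaptic spike precedes the postsynaptic one, and weakens otherwise. The sketch below implements the textbook pair-based rule; the amplitudes and time constant are generic illustrations, not any specific chip's values.

```python
# Textbook pair-based STDP rule (illustrative parameters).
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    # dt_ms = t_post - t_pre; positive means pre-before-post (potentiation)
    if dt_ms >= 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)     # post-before-pre (depression)

for dt in (-40, -10, 5, 20):
    print(f"dt = {dt:+} ms -> weight change {stdp_dw(dt):+.4f}")
```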

Tuesday, December 23, 2025

Tohoku University and Fujitsu Use AI to Discover Promising New Superconducting Material

The AI technology was utilized to automatically clarify causal relationships from measurement data obtained at NanoTerasu Synchrotron Light Source
Image Credit: Scientific Frontline / stock image

Tohoku University and Fujitsu Limited announced their successful application of AI to derive new insights into the superconductivity mechanism of a new superconducting material. Their findings demonstrate an important use case for AI technology in new materials development and suggest that the technology has the potential to accelerate research and development. This could drive innovation in various industries such as environment and energy, drug discovery and healthcare, and electronic devices.

The two parties used Fujitsu's AI platform Fujitsu Kozuchi to develop a new discovery intelligence technique to accurately estimate causal relationships. Fujitsu will begin offering a trial environment for this technology in March 2026. Furthermore, in collaboration with Tohoku University's Advanced Institute for Materials Research (WPI-AIMR), the two parties applied this technology to data measured by angle-resolved photoemission spectroscopy (ARPES), an experimental method for observing the state of electrons in a material, using a specific superconducting material as a sample.

Monday, December 15, 2025

AI helps explain how covert attention works and uncovers new neuron types

Image Credit: Scientific Frontline / AI generated

Shifting focus on a visual scene without moving our eyes — think driving or reading a room for the reaction to your joke — is a behavior known as covert attention. We do it all the time, but little is known about its neurophysiological foundation. Now, using convolutional neural networks (CNNs), UC Santa Barbara researchers Sudhanshu Srivastava, Miguel Eckstein and William Wang have uncovered the underpinnings of covert attention and, in the process, have found new, emergent neuron types, which they confirmed in real life using data from mouse brain studies. 

“This is a clear case of AI advancing neuroscience, cognitive sciences and psychology,” said Srivastava, a former graduate student in the lab of Eckstein, now a postdoctoral researcher at UC San Diego. 

Monday, November 24, 2025

New Artificial Intelligence Model Could Speed Rare Disease Diagnosis

A DNA strand with a highlighted area indicating a mutation
Image Credit: Scientific Frontline

Every human has tens of thousands of tiny genetic alterations in their DNA, also known as variants, that affect how cells build proteins.

Yet in a given human genome, only a few of these changes are likely to modify proteins in ways that cause disease, which raises a key question: How can scientists find the disease-causing needles in the vast haystack of genetic variants?

For years, scientists have been working on genome-wide association studies and artificial intelligence tools to tackle this question. Now, a new AI model developed by Harvard Medical School researchers and colleagues has pushed forward these efforts. The model, called popEVE, produces a score for each variant in a patient’s genome indicating its likelihood of causing disease and places variants on a continuous spectrum.
