Scientific Frontline: Artificial Intelligence
Showing posts with label Artificial Intelligence. Show all posts

Tuesday, January 20, 2026

Physicists employ AI labmates to supercharge LED light control

Sandia National Laboratories scientists Saaketh Desai, left, and Prasad Iyer modernized an optics lab with a team of artificial intelligences that learn from data, design and run experiments, and interpret results.
Photo Credit: Craig Fritz

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: A team of artificial intelligence agents successfully optimized the steering of LED light fourfold in approximately five hours, a task researchers previously estimated would require years of manual experimentation.
  • Methodology: Researchers established a "self-driving lab" utilizing three distinct AI agents: a generative AI to simplify complex data, an active learning agent to autonomously design and execute experiments on optical equipment, and a third "equation learner" AI to derive mathematical formulas validating the results and ensuring interpretability.
  • Key Data: The AI system executed 300 experiments to achieve an average 2.2-times improvement in light steering efficiency across a 74-degree angle, with specific angles showing a fourfold increase in performance compared to previous human-led efforts.
  • Significance: This study demonstrates that AI can transcend mere automation to become a collaborative engine for scientific discovery, solving the "black box" problem by generating verifiable equations that explain the underlying physics of the optimized results.
  • Future Application: Refined control of spontaneous light emission could allow cheaper, smaller, and more efficient LEDs to replace lasers in technologies such as holographic projectors, self-driving cars, and UPC scanners.
  • Branch of Science: Nanophotonics, Optics, and Artificial Intelligence.
  • Additional Detail: The AI agents identified a solution based on a fundamentally new conceptual approach to nanoscale light-material interactions that the human research team had not previously considered.
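The article names the three agents but not their algorithms. As a rough illustration of what the active-learning agent does — repeatedly choosing the next experiment based on the results so far — here is a minimal explore/exploit loop. The objective function, parameter range, and sampling rule are invented placeholders, not Sandia's actual setup:

```python
import random

def run_experiment(setting):
    """Placeholder objective: stands in for a real optical measurement.
    Its peak (at 37.0) is unknown to the optimizer."""
    return -(setting - 37.0) ** 2

def active_learning(n_rounds=300, lo=0.0, hi=100.0, seed=0):
    """Greedy active-learning loop: usually sample near the best
    result so far, occasionally explore at random."""
    rng = random.Random(seed)
    best_x, best_y = None, float("-inf")
    for _ in range(n_rounds):
        if best_x is None or rng.random() < 0.2:            # explore
            x = rng.uniform(lo, hi)
        else:                                                # exploit near incumbent
            x = min(hi, max(lo, best_x + rng.gauss(0, 2.0)))
        y = run_experiment(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y
```

With a budget of 300 "experiments" — the same number the Sandia system used — the loop homes in on the hidden optimum without ever seeing the objective's formula.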

Thursday, January 15, 2026

Fermilab researchers supercharge neural networks, boosting potential of AI to revolutionize particle physics

Nhan Tran, head of Fermilab’s AI Coordination Office, holds a circuit board used for particle tracker data analysis.
Photo Credit: JJ Starr, Fermilab

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Fermilab researchers led the development of hls4ml, an open-source framework capable of embedding neural networks directly into customized digital hardware.
  • Methodology: The software automatically translates machine learning code from libraries such as PyTorch and TensorFlow into logic gates compatible with field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs).
  • Key Data: Specialized hardware utilizing this framework can execute more than 10 million decisions per second, a necessity for managing the six-fold data increase projected for the High-Luminosity Large Hadron Collider.
  • Significance: By processing algorithms in real time with reduced latency and power usage, the system ensures that critical scientific data is identified and stored rather than discarded during high-volume experiments.
  • Future Application: Primary deployment targets the CMS experiment trigger system, with broader utility in fusion energy research, neuroscience, and materials science.
  • Branch of Science: Particle Physics, Artificial Intelligence, and Microelectronics.
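The summary above describes compiling neural networks into FPGA logic. A key step in that translation is replacing floating-point arithmetic with cheap fixed-point arithmetic. The following toy sketch (pure Python, invented bit widths — not hls4ml's actual code) illustrates that quantization step:

```python
def to_fixed(x, int_bits=4, frac_bits=4):
    """Quantize a float to a signed fixed-point value with 4 integer
    bits (including sign) and 4 fractional bits, saturating on overflow."""
    scale = 1 << frac_bits
    lo = -(1 << (int_bits - 1))
    hi = (1 << (int_bits - 1)) - 1 / scale
    q = round(x * scale) / scale      # snap to nearest representable value
    return max(lo, min(hi, q))        # clamp into the representable range

def fixed_dot(weights, inputs):
    """Dot product with every operand and intermediate held in fixed
    point, as an FPGA multiply-accumulate chain would compute it."""
    acc = 0.0
    for w, x in zip(weights, inputs):
        acc = to_fixed(acc + to_fixed(w) * to_fixed(x))
    return acc
```

In hls4ml itself, precision is configured per layer with HLS fixed-point types (e.g. `ap_fixed<16,6>`) rather than hand-written code like this; the point of the sketch is only that narrow fixed-point arithmetic is what lets an inference fit in logic gates and finish in nanoseconds.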

Wednesday, January 14, 2026

A Robot Learns to Lip Sync


Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Columbia Engineering researchers developed a robot that autonomously learns to lip-sync to speech and song through observational learning, bypassing traditional rule-based programming.
  • Methodology: The system utilizes a "vision-to-action" language model (VLA) where the robot first maps its own facial mechanics by watching its reflection, then correlates these movements with human lip dynamics observed in YouTube videos.
  • Specific Detail/Mechanism: The robot features a flexible silicone skin driven by 26 independent motors, allowing it to translate audio signals directly into motor actions without explicit instruction on phoneme shapes.
  • Key Statistic or Data: The robot successfully articulated words in multiple languages and performed songs from an AI-generated album, utilizing training data from thousands of random facial expressions and hours of human video footage.
  • Context or Comparison: Unlike standard humanoids that use rigid, pre-defined facial choreographies, this data-driven approach aims to resolve the "Uncanny Valley" effect by generating fluid, human-like motion.
  • Significance/Future Application: This technology addresses the "missing link" of facial affect in robotics, a critical component for effectively deploying humanoid robots in social roles such as elder care, education, and service industries.

Monday, January 12, 2026

Intraoperative Tumor Histology May Enable More-Effective Cancer Surgeries

From left to right: Images of kidney tissue as detected with UV-PAM, as imaged by AI to mimic traditional H&E staining, and as they appear when directly treated with H&E staining.
Image Credit: Courtesy of California Institute of Technology

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Researchers developed ultraviolet photoacoustic microscopy (UV-PAM) integrated with deep learning to perform rapid, label-free, subcellular-resolution histology on excised tumor tissue directly in the operating room.
  • Mechanism: A low-energy laser excites the absorption peaks of DNA and RNA nucleic acids to generate ultrasonic vibrations; AI algorithms then process these signals to create virtual images that mimic traditional hematoxylin and eosin (H&E) staining without chemical processing.
  • Key Data: The system achieves a spatial resolution of 200 to 300 nanometers and delivers diagnostic results in under 10 minutes (potentially under 5 minutes), effectively identifying the dense, enlarged nuclei characteristic of cancer cells.
  • Context: Unlike standard pathology, which requires time-consuming freezing, fixation, and slicing that can damage fatty tissues like breast tissue, this method preserves sample integrity and eliminates preparation artifacts.
  • Significance: This technology aims to drastically reduce re-operation rates—currently up to one-third for breast cancer lumpectomies—by allowing surgeons to confirm clean tumor margins intraoperatively across various tissue types (breast, bone, skin, organ).

Saturday, January 10, 2026

What Is: Organoid

Organoids: The Science and Ethics of Mini-Organs
Image Credit: Scientific Frontline / AI generated

The "At a Glance" Summary

  • Defining the Architecture: Unlike traditional cell cultures, organoids are 3D structures grown from induced pluripotent stem cells (iPSCs) or adult stem cells. They rely on the cells' intrinsic ability to self-organize, creating complex structures that mimic the lineage and spatial arrangement of an in vivo organ.
  • The "Avatar" in the Lab: Organoids allow for Personalized Medicine. By growing an organoid from a specific patient's cells, researchers can test drug responses on a "digital twin" of that patient’s tumor or tissue, eliminating the guesswork of trial-and-error prescriptions.
  • Bridge to Clinical Trials: Organoids serve as a critical bridge between the Petri dish and human clinical trials, potentially reducing the failure rate of new drugs and decreasing the reliance on animal testing models which often fail to predict human reactions.
  • The Ethical Frontier: As cerebral organoids (mini-brains) become more complex, exhibiting brain waves similar to preterm infants, science faces a profound question: At what point does biological complexity become sentience?

Thursday, January 8, 2026

How light reflects on leaves may help researchers identify dying forests

Trees at UNDERC
Photo Credit: Barbara Johnston/University of Notre Dame

Early detection of declining forest health is critical for the timely intervention and treatment of drought-stressed and diseased flora, especially in areas prone to wildfires. Obtaining a reliable measure of whole-ecosystem health before it is too late, however, is an ongoing challenge for forest ecologists.

Traditional sampling is too labor-intensive for whole-forest surveys, while modern genomics—though capable of pinpointing active genes—is still too expensive for large-scale application. Remote sensing offers a high-resolution solution from the skies, but currently limited paradigms for data analysis mean the images obtained do not say enough, early enough.

A new study from researchers at the University of Notre Dame, published in Communications Earth & Environment, uncovers a more comprehensive picture of forest health. Funded by NASA, the research shows that spectral reflectance—a measurement obtained from satellite images—corresponds with the expression of specific genes.

Reflectance is how much light reflects off of leaf material, and at which specific wavelengths, in the visible and near-infrared range. Calculated as the ratio of reflected light to incoming light and measured using special sensors, reflectance data reveals a unique signature specific to the leaf’s composition and condition.
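The reflectance ratio described above is simple to compute. As a concrete example, here is that calculation together with NDVI, a standard vegetation index built from red and near-infrared reflectance (the article does not name the specific indices or gene-expression models the study used, and the sensor readings below are hypothetical):

```python
def reflectance(reflected, incoming):
    """Reflectance: the ratio of reflected to incoming light at a given band."""
    return reflected / incoming

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: healthy leaves absorb red
    light and strongly reflect near-infrared, so NDVI rises with vigor."""
    return (nir - red) / (nir + red)

# Hypothetical sensor readings (reflected, incoming) for two bands:
red_band = reflectance(120.0, 1000.0)   # 0.12 reflectance in red
nir_band = reflectance(550.0, 1000.0)   # 0.55 reflectance in near-infrared
vigor = ndvi(nir_band, red_band)        # roughly 0.64: a healthy canopy
```

The Notre Dame result goes further than indices like this, linking the full spectral signature to which genes a tree is actively expressing.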

Wednesday, January 7, 2026

Nature-inspired computers are shockingly good at math

Researchers Brad Theilman, center, and Felix Wang, behind, unpack a neuromorphic computing core at Sandia National Laboratories. While the hardware might look similar to a regular computer, the circuitry is radically different. It applies elements of neuroscience to operate more like a brain, which is extremely energy-efficient.
Photo Credit: Craig Fritz

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Neuromorphic (brain-inspired) computing systems have been proven capable of solving partial differential equations (PDEs) with high efficiency, a task previously believed to be the exclusive domain of traditional, energy-intensive supercomputers.
  • Methodology: Researchers at Sandia National Laboratories developed a novel algorithm that utilizes a circuit model based on cortical networks to execute complex mathematical calculations, effectively mapping brain-like architecture to rigorous physical simulations.
  • Theoretical Breakthrough: The study establishes a mathematical link between a computational neuroscience model introduced 12 years ago and the solution of PDEs, demonstrating that neuromorphic hardware can handle deterministic math, not just pattern recognition.
  • Comparison: Unlike conventional supercomputers that require immense power for simulations (such as fluid dynamics or electromagnetic fields), this neuromorphic approach mimics the brain's ability to perform exascale-level computations with minimal energy consumption.
  • Primary Implication: This advancement could enable the development of neuromorphic supercomputers for national security and nuclear stockpile simulations, significantly reducing the energy footprint of critical scientific modeling.
  • Secondary Significance: The findings suggest that "diseases of the brain could be diseases of computation," providing a new framework for understanding neurological conditions by studying how these biological-style networks process information.
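The article does not describe the neuromorphic algorithm itself. For context, here is the kind of PDE workload at stake: a 1D heat equation advanced by the conventional explicit finite-difference method that supercomputers run at massive scale (grid size, diffusivity, and boundary conditions here are invented for illustration):

```python
def heat_step(u, alpha=0.1):
    """One explicit finite-difference step of du/dt = alpha * d2u/dx2
    on a 1D grid with fixed zero-temperature boundaries (dx = dt = 1)."""
    return [0.0] + [
        u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [0.0]

# A hot spike in the middle of a cold rod diffuses outward over time.
u = [0.0] * 5 + [1.0] + [0.0] * 5
for _ in range(50):
    u = heat_step(u)
```

Each step is a regular sweep of multiply-adds over the whole grid — exactly the dense, power-hungry arithmetic pattern the Sandia work shows brain-like, event-driven circuitry can replace.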

Tuesday, January 6, 2026

AI model predicts disease risk while you sleep

SleepFM utilizes diverse physiological data streams, highlighting the potential to improve disease forecasting and better understand health risks.
Image Credit: Scientific Frontline / AI generated (Gemini)

The first artificial intelligence model of its kind can predict more than 100 health conditions from one night’s sleep.

A poor night’s sleep portends a bleary-eyed next day, but it could also hint at diseases that will strike years down the road. A new artificial intelligence model developed by Stanford Medicine researchers and their colleagues can use physiological recordings from one night’s sleep to predict a person’s risk of developing more than 100 health conditions.

Known as SleepFM, the model was trained on nearly 600,000 hours of sleep data collected from 65,000 participants. The sleep data comes from polysomnography, a comprehensive sleep assessment that uses various sensors to record brain activity, heart activity, respiratory signals, leg movements, eye movements, and more.

Monday, December 29, 2025

Machine learning drives drug repurposing for neuroblastoma

Daniel Bexell leads the research group in molecular pediatric oncology, and Katarzyna Radke, first author of the study.
Photo Credit: Lund University

Using machine learning and a large volume of data on genes and existing drugs, researchers at Lund University in Sweden have identified a combination of statins and phenothiazines that is particularly promising in the treatment of the aggressive form of neuroblastoma. Experimental trials showed slowed tumor growth and higher survival rates.

The childhood cancer neuroblastoma affects around 15-20 children in Sweden every year. Most fall ill before the age of five. Neuroblastoma is characterized by, among other things, tumors that are often resistant to drug treatment, including chemotherapy. The disease exists in both mild and severe forms, and the Lund University researchers are mainly studying the aggressive form, high-risk neuroblastoma. This variant is the form of childhood cancer with the lowest survival rate.

Friday, December 26, 2025

The Invisible Scale: Measuring AI’s Return on Energy

The Coin of Energy: Efficiency Paying for Itself
Image Credit: Scientific Frontline

In the public imagination, Artificial Intelligence is often visualized as a chatbot writing a poem or a generator creating a surreal image. This trivializes the technology and magnifies the scrutiny on its energy consumption. When AI is viewed as a toy, its electricity bill seems indefensible.

But when viewed as a scientific instrument—akin to a particle accelerator or an electron microscope—the equation shifts. The question is not "How much power does AI use?" but rather "What is the return on that energy investment?"

When measured across a single human lifetime, the dividends of AI in time, cost, and survival are staggering.

Thursday, December 25, 2025

Why can’t powerful AIs learn basic multiplication?

Image Credit: Scientific Frontline / Stock image

These days, large language models can handle increasingly complex tasks, writing intricate code and engaging in sophisticated reasoning.

But when it comes to four-digit multiplication, a task taught in elementary school, even state-of-the-art systems fail. Why? 

A new paper by Xiaoyan Bai, a computer science Ph.D. student at the University of Chicago, and Chenhao Tan, faculty co-director of the Data Science Institute's Novel Intelligence Research Initiative, finds answers by reverse-engineering failure and success.

They worked with collaborators from MIT, Harvard University, University of Waterloo and Google DeepMind to probe AI’s “jagged frontier”—a term for its capacity to excel at complex reasoning yet stumble on seemingly simple tasks.
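The task the models stumble on is the schoolbook algorithm every pupil learns: multiply by one digit at a time, propagate the carries, then sum the shifted partial products. A direct implementation (not code from the paper — just the textbook procedure, shown to make plain how mechanical it is) looks like this:

```python
def multiply_by_digit(a: int, d: int) -> int:
    """Multiply a number by a single digit, propagating carries one
    column at a time — the long chain of dependent steps that makes
    multi-digit multiplication hard to do in one pass."""
    result, carry, place = 0, 0, 0
    for ch in reversed(str(a)):
        carry, col = divmod(int(ch) * d + carry, 10)
        result += col * 10 ** place
        place += 1
    return result + carry * 10 ** place

def schoolbook_multiply(a: int, b: int) -> int:
    """Sum the single-digit partial products, each shifted by its
    digit's place value — one written 'row' per digit of b."""
    return sum(multiply_by_digit(a, int(ch)) * 10 ** i
               for i, ch in enumerate(reversed(str(b))))
```

A four-digit problem like 1234 × 5678 requires sixteen digit products and a cascade of carries, every one of which must be exact — a structure quite unlike the fuzzy pattern-matching at which language models excel.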

The Quest for the Synthetic Synapse

"Spike Timing" difference (Biology vs. Silicon)
Image Credit: Scientific Frontline

The modern AI revolution is built on a paradox: it is incredibly smart, but thermodynamically reckless. A large language model requires megawatts of power to function, whereas the human brain—which allows you to drive a car, debate philosophy, and regulate a heartbeat simultaneously—runs on roughly 20 watts, the equivalent of a dim lightbulb.

To close this gap, science is moving away from the "Von Neumann" architecture (where memory and processing are separate) toward Neuromorphic Computing—chips that mimic the physical structure of the brain. This report analyzes how close we are to building a "synthetic synapse."
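The standard building block of such chips is the leaky integrate-and-fire neuron: a membrane potential that integrates input, slowly leaks charge, and emits a spike when it crosses a threshold. A minimal sketch (parameter values are invented for illustration, not taken from any particular chip):

```python
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron. The membrane potential v
    integrates each input, decays ('leaks') every step, and on
    crossing threshold it emits a spike and resets. Computation is
    carried by sparse spike events, not continuous matrix math —
    the source of neuromorphic hardware's energy efficiency."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v = leak * v + i_in        # integrate with leak
        if v >= threshold:         # fire
            spikes.append(t)
            v = 0.0                # reset
    return spikes

# A steady weak drive makes the neuron spike at a regular rate.
spike_times = lif_run([0.3] * 20)
```

Crucially, between spikes such a neuron consumes almost nothing — which is how a network of them can approach the brain's 20-watt budget.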

Tuesday, December 23, 2025

Tohoku University and Fujitsu Use AI to Discover Promising New Superconducting Material

The AI technology was utilized to automatically clarify causal relationships from measurement data obtained at NanoTerasu Synchrotron Light Source
Image Credit: Scientific Frontline / stock image

Tohoku University and Fujitsu Limited announced their successful application of AI to derive new insights into the superconductivity mechanism of a new superconducting material. Their findings demonstrate an important use case for AI technology in new materials development and suggest that the technology has the potential to accelerate research and development. This could drive innovation in various industries such as environment and energy, drug discovery and healthcare, and electronic devices.

The two parties used Fujitsu's AI platform Fujitsu Kozuchi to develop a new discovery intelligence technique to accurately estimate causal relationships. Fujitsu will begin offering a trial environment for this technology in March 2026. Furthermore, in collaboration with Tohoku University's Advanced Institute for Materials Research (WPI-AIMR), the two parties applied this technology to data measured by angle-resolved photoemission spectroscopy (ARPES), an experimental method used in materials research to observe the state of electrons in a material, using a specific superconducting material as a sample.

Monday, December 15, 2025

AI helps explain how covert attention works and uncovers new neuron types

Image Credit: Scientific Frontline / AI generated

Shifting focus on a visual scene without moving our eyes — think driving or reading a room for the reaction to your joke — is a behavior known as covert attention. We do it all the time, but little is known about its neurophysiological foundation. Now, using convolutional neural networks (CNNs), UC Santa Barbara researchers Sudhanshu Srivastava, Miguel Eckstein and William Wang have uncovered the underpinnings of covert attention and, in the process, have found new, emergent neuron types, which they confirmed in real life using data from mouse brain studies. 

“This is a clear case of AI advancing neuroscience, cognitive sciences and psychology,” said Srivastava, a former graduate student in the lab of Eckstein, now a postdoctoral researcher at UC San Diego. 

Monday, November 24, 2025

New Artificial Intelligence Model Could Speed Rare Disease Diagnosis

A DNA strand with a highlighted area indicating a mutation
Image Credit: Scientific Frontline

Every human has tens of thousands of tiny genetic alterations in their DNA, also known as variants, that affect how cells build proteins.

Yet in a given human genome, only a few of these changes are likely to modify proteins in ways that cause disease, which raises a key question: How can scientists find the disease-causing needles in the vast haystack of genetic variants?

For years, scientists have been working on genome-wide association studies and artificial intelligence tools to tackle this question. Now, a new AI model developed by Harvard Medical School researchers and colleagues has pushed forward these efforts. The model, called popEVE, produces a score for each variant in a patient’s genome indicating its likelihood of causing disease and places variants on a continuous spectrum.

Monday, November 17, 2025

Researchers Unveil First-Ever Defense Against Cryptanalytic Attacks on AI

Image Credit: Scientific Frontline

Security researchers have developed the first functional defense mechanism capable of protecting against “cryptanalytic” attacks used to “steal” the model parameters that define how an AI system works.

“AI systems are valuable intellectual property, and cryptanalytic parameter extraction attacks are the most efficient, effective, and accurate way to ‘steal’ that intellectual property,” says Ashley Kurian, first author of a paper on the work and a Ph.D. student at North Carolina State University. “Until now, there has been no way to defend against those attacks. Our technique effectively protects against these attacks.”

“Cryptanalytic attacks are already happening, and they’re becoming more frequent and more efficient,” says Aydin Aysu, corresponding author of the paper and an associate professor of electrical and computer engineering at NC State. “We need to implement defense mechanisms now, because implementing them after an AI model’s parameters have been extracted is too late.”

Sunday, November 9, 2025

Artificial Intelligence: In-Depth Description

Futuristic AI mainframe
Image Credit: Scientific Frontline / AI Generated

Artificial Intelligence (AI) is a wide-ranging branch of computer science focused on building smart machines capable of performing tasks that typically require human intelligence, such as learning, reasoning, problem-solving, perception, and language comprehension. The primary goal is not just to mimic human thought but to create systems that can learn from data, identify patterns, and make autonomous decisions to solve complex problems, often with greater speed and accuracy than humans.

Monday, October 27, 2025

Rebalancing the Gut: How AI Solved a 25-Year Crohn’s Disease Mystery

Electron micrographs show how macrophages expressing girdin neutralize pathogens by fusing phagosomes (P) with the cell’s lysosomes (L) to form phagolysosomes (PL), compartments where pathogens and cellular debris are broken down (left). This process is crucial for maintaining cellular homeostasis. In the absence of girdin, this fusion fails, allowing pathogens to evade degradation and escape neutralization (right).
Image Credit: UC San Diego Health Sciences

The human gut contains two types of macrophages, or specialized white blood cells, that have very different but equally important roles in maintaining balance in the digestive system. Inflammatory macrophages fight microbial infections, while non-inflammatory macrophages repair damaged tissue. In Crohn’s disease — a form of inflammatory bowel disease (IBD) — an imbalance between these two types of macrophages can result in chronic gut inflammation, damaging the intestinal wall and causing pain and other symptoms. 

Researchers at University of California San Diego School of Medicine have developed a new approach that integrates artificial intelligence (AI) with advanced molecular biology techniques to decode what determines whether a macrophage will become inflammatory or non-inflammatory. 

The study also resolves a longstanding mystery surrounding the role of a gene called NOD2 in this decision-making process. NOD2 was discovered in 2001 and is the first gene linked to a heightened risk for Crohn’s disease.

Wednesday, October 22, 2025

Researchers Explore How AI Could Shape the Future of Student Learning

Johns Hopkins study reveals the strengths and pitfalls of incorporating chatbots into middle and high school classrooms as a 'co-tutor'
Image Credit: Scientific Frontline / AI generated

As students settle into the new school year, one question looms large: How will artificial intelligence tools like ChatGPT affect their learning? Seeking answers, a team from Johns Hopkins recently introduced a chatbot into a classroom of middle and high school students to act as a co-tutor and study the impact.

The pilot study included 22 students enrolled in the Johns Hopkins Center for Talented Youth's online course Diagnosis: Be the Doctor. It involved two virtual classrooms; both were taught by the same instructor and organized similarly, except for one key difference: Students in one classroom had access to a large language model designed to act like a coach, asking Socratic-style questions as students worked through medical case studies.

Monday, October 20, 2025

New AI Model for Drug Design Brings More Physics to Bear in Predictions

This illustration shows the mesh of anchoring points the team obtained by discretizing the manifold, an estimation of the distribution of atoms and the probable locations of electrons in the molecule. This is important because, as the authors note in the new paper, treating atoms as solid points "does not fully reflect the spatial extent that real atoms occupy in three-dimensional space."
Image Credit: Liu et al./PNAS

When machine learning is used to suggest new potential scientific insights or directions, algorithms sometimes offer solutions that are not physically sound. Take for example AlphaFold, the AI system that predicts the complex ways in which amino acid chains will fold into 3D protein structures. The system sometimes suggests "unphysical" folds—configurations that are implausible based on the laws of physics—especially when asked to predict the folds for chains that are significantly different from its training data. To limit this type of unphysical result in the realm of drug design, Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences at Caltech, and her colleagues have introduced a new machine learning model called NucleusDiff, which incorporates a simple physical idea into its training, greatly improving the algorithm's performance.
