Scientific Frontline: Computer Science
Showing posts with label Computer Science.

Thursday, January 15, 2026

Efficient cooling method could enable chip-based quantum computers

Caption: Researchers developed a photonic chip that incorporates precisely designed antennas to manipulate beams of tightly focused, intersecting light, which can rapidly cool a quantum computing system to someday enable greater efficiency and stability.
Illustration Credit: Michael Hurley and Sampson Wilcox
(CC BY-NC-ND 4.0)

Scientific Frontline: "At a Glance" Summary

  • Core Discovery: Researchers successfully demonstrated a high-efficiency polarization-gradient cooling method integrated directly onto a photonic chip, enabling faster and more effective cooling for trapped-ion quantum computers.
  • Methodology: The system utilizes precisely designed nanoscale antennas connected by waveguides to emit intersecting light beams with diverse polarizations; this creates a rotating light vortex that drastically reduces the kinetic energy of trapped ions.
  • Key Data: The approach achieved ion cooling temperatures nearly 10 times below the standard Doppler limit, reaching this state in approximately 100 microseconds—several times faster than comparable techniques (a back-of-the-envelope check of these numbers follows this list).
  • Context: Unlike traditional quantum setups that rely on bulky external lasers and are sensitive to vibrations, this integrated architecture generates stable optical fields directly on the chip, eliminating the need for complex external optical alignment.
  • Significance: This advancement validates a scalable path for quantum computing where thousands of ion-interface sites can coexist on a single chip, significantly improving the stability and practicality of quantum information processing.
  • Specific Mechanism: The on-chip antennas feature specialized curved notches designed to scatter light upward, maximizing the optical interaction with the ion and allowing for advanced operations beyond simple cooling.
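For scale, the Doppler limit referenced in the summary follows from the standard relation T_D = ħΓ / (2k_B). Below is a minimal back-of-the-envelope sketch, assuming a generic dipole transition with a 20 MHz natural linewidth; the article does not specify the ion or transition, so treat the numbers as illustrative only.

```python
# Back-of-the-envelope check of the Doppler limit quoted above.
# Assumes a generic dipole transition with linewidth Gamma = 2*pi * 20 MHz
# (an assumption for illustration, not the experiment's parameters).
import math

hbar = 1.0545718e-34        # reduced Planck constant, J*s
k_B = 1.380649e-23          # Boltzmann constant, J/K
Gamma = 2 * math.pi * 20e6  # natural linewidth, rad/s (assumed)

T_doppler = hbar * Gamma / (2 * k_B)  # standard Doppler cooling limit
T_reported = T_doppler / 10           # "nearly 10 times below" the limit

print(f"Doppler limit: {T_doppler * 1e3:.2f} mK")  # ~0.48 mK
print(f"~10x below:    {T_reported * 1e6:.0f} uK")  # ~48 uK
```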

Wednesday, January 7, 2026

Nature-inspired computers are shockingly good at math

Researchers Brad Theilman, center, and Felix Wang, behind, unpack a neuromorphic computing core at Sandia National Laboratories. While the hardware might look similar to a regular computer, the circuitry is radically different. It applies elements of neuroscience to operate more like a brain, which is extremely energy-efficient.
Photo Credit: Craig Fritz

Scientific Frontline: "At a Glance" Summary

  • Main Discovery: Neuromorphic (brain-inspired) computing systems have been proven capable of solving partial differential equations (PDEs) with high efficiency, a task previously believed to be the exclusive domain of traditional, energy-intensive supercomputers (a schematic example of this problem class follows the list).
  • Methodology: Researchers at Sandia National Laboratories developed a novel algorithm that utilizes a circuit model based on cortical networks to execute complex mathematical calculations, effectively mapping brain-like architecture to rigorous physical simulations.
  • Theoretical Breakthrough: The study establishes a mathematical link between a computational neuroscience model introduced 12 years ago and the solution of PDEs, demonstrating that neuromorphic hardware can handle deterministic math, not just pattern recognition.
  • Comparison: Unlike conventional supercomputers that require immense power for simulations (such as fluid dynamics or electromagnetic fields), this neuromorphic approach mimics the brain's ability to perform exascale-level computations with minimal energy consumption.
  • Primary Implication: This advancement could enable the development of neuromorphic supercomputers for national security and nuclear stockpile simulations, significantly reducing the energy footprint of critical scientific modeling.
  • Secondary Significance: The findings suggest that "diseases of the brain could be diseases of computation," providing a new framework for understanding neurological conditions by studying how these biological-style networks process information.
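The article describes the Sandia algorithm only at a high level, so the sketch below does not reproduce their method. It simply illustrates the class of problem: a diffusion-type PDE advanced by purely local, nearest-neighbor updates, the kind of computation that maps naturally onto a mesh of simple neuron-like units.

```python
# Illustrative only: a 1D heat equation u_t = D * u_xx solved with explicit
# finite differences. This is NOT Sandia's algorithm; it shows the kind of
# local, nearest-neighbor update rule a neuromorphic mesh can embody.
import numpy as np

D, dx, dt = 1.0, 0.1, 0.004   # diffusivity, grid spacing, time step
assert D * dt / dx**2 <= 0.5  # explicit-scheme stability condition

u = np.zeros(100)
u[45:55] = 1.0                # initial hot spot

for _ in range(500):
    # each cell updates from its two neighbors -- a purely local rule
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(f"peak temperature after diffusion: {u.max():.3f}")
```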

Thursday, December 25, 2025

The Quest for the Synthetic Synapse

Spike Timing" difference (Biology vs. Silicon)
Image Credit: Scientific Frontline

The modern AI revolution is built on a paradox: it is incredibly smart, but thermodynamically reckless. A large language model requires megawatts of power to function, whereas the human brain—which allows you to drive a car, debate philosophy, and regulate a heartbeat simultaneously—runs on roughly 20 watts, the equivalent of a dim lightbulb.

To close this gap, science is moving away from the "von Neumann" architecture (where memory and processing are separate) toward Neuromorphic Computing—chips that mimic the physical structure of the brain. This report analyzes how close we are to building a "synthetic synapse."
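For a rough sense of what a silicon "neuron" computes: most neuromorphic chips implement some variant of the leaky integrate-and-fire model, in which a membrane voltage integrates input, leaks toward rest, and emits a spike on crossing a threshold. A minimal sketch, with all constants chosen for illustration rather than taken from any particular chip:

```python
# Leaky integrate-and-fire neuron: the canonical building block that
# neuromorphic chips implement in analog or digital circuits.
# All constants are illustrative, not from any particular chip.
import numpy as np

dt, tau = 1e-3, 20e-3                 # time step (s), membrane time constant (s)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v, spikes = v_rest, []

rng = np.random.default_rng(0)
for t in range(1000):                 # simulate 1 second
    i_syn = rng.uniform(0.0, 2.5)     # random synaptic input current
    v += dt / tau * (-(v - v_rest) + i_syn)  # leak toward rest, integrate input
    if v >= v_thresh:                 # threshold crossing -> spike
        spikes.append(t * dt)
        v = v_reset                   # reset after firing

print(f"{len(spikes)} spikes in 1 s of simulated time")
```

The energy argument follows from this event-driven style: circuits sit idle between spikes instead of clocking every operation, which is a large part of how the brain manages on roughly 20 watts.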

Tuesday, December 9, 2025

Breakthrough could connect quantum computers at 200 times longer distance

New research from Asst. Prof. Tian Zhong of the University of Chicago Pritzker School of Molecular Engineering could make it possible for quantum computers to connect at distances up to 1,243 miles, shattering previous records.
Photo Credit: Jason Smith

A new nanofabrication approach could increase the range of quantum networks from a few kilometers to a potential 2,000 km, bringing the quantum internet closer than ever

Quantum computers are powerful, lightning-fast and notoriously difficult to connect to one another over long distances. 

Previously, the maximum distance over which two quantum computers could connect through a fiber cable was a few kilometers. This means that, even if such a cable were run between them, quantum computers in downtown Chicago’s Willis Tower and at the University of Chicago Pritzker School of Molecular Engineering (UChicago PME) on the South Side would be too far apart to communicate with each other.
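Much of that few-kilometer ceiling comes from exponential photon loss in optical fiber. A quick sketch, assuming the roughly 0.2 dB/km attenuation of telecom fiber at 1550 nm (an assumed figure for illustration; the article does not quote one):

```python
# Why distance is hard: photon survival in optical fiber decays
# exponentially with length. Assumes 0.2 dB/km, typical of telecom
# fiber at 1550 nm (an assumption; the article gives no loss figure).
def transmission(length_km: float, loss_db_per_km: float = 0.2) -> float:
    """Fraction of photons surviving a fiber of the given length."""
    return 10 ** (-loss_db_per_km * length_km / 10)

for d in (5, 50, 500, 2000):
    print(f"{d:>5} km: {transmission(d):.3e} of photons survive")
```

Because quantum states cannot simply be amplified like classical signals, numbers like these are why long links need quantum memories or repeaters rather than brighter sources.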

Monday, November 17, 2025

Researchers Unveil First-Ever Defense Against Cryptanalytic Attacks on AI

Image Credit: Scientific Frontline

Security researchers have developed the first functional defense mechanism capable of protecting against “cryptanalytic” attacks used to “steal” the model parameters that define how an AI system works.

“AI systems are valuable intellectual property, and cryptanalytic parameter extraction attacks are the most efficient, effective, and accurate way to ‘steal’ that intellectual property,” says Ashley Kurian, first author of a paper on the work and a Ph.D. student at North Carolina State University. “Until now, there has been no way to defend against those attacks. Our technique effectively protects against these attacks.”

“Cryptanalytic attacks are already happening, and they’re becoming more frequent and more efficient,” says Aydin Aysu, corresponding author of the paper and an associate professor of electrical and computer engineering at NC State. “We need to implement defense mechanisms now, because implementing them after an AI model’s parameters have been extracted is too late.”

Saturday, November 15, 2025

Computer Science: In-Depth Description

Photo Credit: Massimo Botturi

Computer Science is the systematic study of computation, information, and automation, focusing on algorithmic processes, computational machines, and their application. Its primary goals are to understand the theoretical foundations of what can be computed, to design and implement hardware and software systems for processing information, and to apply computational thinking to solve complex problems across all domains of human endeavor.

Thursday, February 6, 2025

First distributed quantum algorithm brings quantum supercomputers closer

Dougal Main and Beth Nichol working on the distributed quantum computer.
Photo Credit: John Cairns.

In a milestone that brings quantum computing tangibly closer to large-scale practical use, scientists at Oxford University’s Department of Physics have demonstrated the first instance of distributed quantum computing. Using a photonic network interface, they successfully linked two separate quantum processors to form a single, fully connected quantum computer, paving the way to tackling computational challenges previously out of reach. The results have been published in Nature. 

The breakthrough addresses quantum’s ‘scalability problem’: a quantum computer powerful enough to be industry-disrupting would have to be capable of processing millions of qubits. Packing all these processors in a single device, however, would require a machine of an immense size. In this new approach, small quantum devices are linked together, enabling computations to be distributed across the network. In theory, there is no limit to the number of processors that could be in the network.  

Thursday, January 23, 2025

Better prediction of epidemics

The curve calculated using a “reproduction matrix” (turquoise) reflects the actual infection rate (black) much more accurately than previous models (yellow and blue).
Graphic Credit: Empa

The reproduction number R is often used as an indicator to predict how quickly an infectious disease will spread. Empa researchers have developed a mathematical model that is just as easy to use but enables more accurate predictions than R. Their model is based on a reproduction matrix that takes into account the heterogeneity of society.

"Your friends have more friends than you do", wrote the US sociologist Scott Feld in 1991. Feld's so-called friendship paradox states that the friends of any given person have more friends on average than the person themselves. This is based on a simple probability calculation: Well-connected people are more likely to appear in other people's social circles. "If you look at any person's circle of friends, it is very likely that this circle contains very well-connected people with an above-average number of friends," explains Empa researcher Ivan Lunati, head of the Computational Engineering laboratory. A similar principle served Lunati and his team as the basis for a new mathematical model that can be used to more accurately predict the development of case numbers during an epidemic.

Monday, March 25, 2024

Large language models use a surprisingly simple mechanism to retrieve some stored knowledge

Caption: Researchers from MIT and elsewhere found that complex large language models use a simple mechanism to retrieve stored knowledge when they respond to a user prompt. The researchers can leverage these simple mechanisms to see what the model knows about different subjects, and possibly to correct false information that it has stored.
Image Credit: Copilot / DALL-E 3 / AI generated from Scientific Frontline prompts

Large language models, such as those that power popular artificial intelligence chatbots like ChatGPT, are incredibly complex. Even though these models are being used as tools in many areas, such as customer support, code generation, and language translation, scientists still don’t fully grasp how they work.

In an effort to better understand what is going on under the hood, researchers at MIT and elsewhere studied the mechanisms at work when these enormous machine-learning models retrieve stored knowledge.

They found a surprising result: Large language models (LLMs) often use a very simple linear function to recover and decode stored facts. Moreover, the model uses the same decoding function for similar types of facts. Linear functions (equations with no exponents, of the form y = mx + b) capture a straightforward, straight-line relationship between two variables.

The researchers showed that, by identifying linear functions for different facts, they can probe the model to see what it knows about new subjects, and where within the model that knowledge is stored.
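In spirit, such a probe amounts to fitting an affine map from a subject's hidden-state vector to the representation of the retrieved attribute. The toy sketch below uses synthetic vectors in place of real transformer hidden states, so it illustrates the fitting procedure, not the paper's actual experiments.

```python
# Toy sketch of a linear "relation decoder": fit a linear map from
# subject representations to attribute representations, then use it
# to probe unseen subjects. Vectors here are synthetic stand-ins for
# the transformer hidden states used in the actual research.
import numpy as np

rng = np.random.default_rng(0)
d = 64
W_true = rng.normal(size=(d, d)) / np.sqrt(d)  # the underlying "fact" relation
subjects = rng.normal(size=(200, d))           # stand-in hidden states
attributes = subjects @ W_true.T + 0.01 * rng.normal(size=(200, d))

# Least-squares fit of the linear decoding function.
W_fit, *_ = np.linalg.lstsq(subjects, attributes, rcond=None)

new_subject = rng.normal(size=d)
predicted_attribute = new_subject @ W_fit      # probe: what does the model "know"?
print("relative fit error:",
      np.linalg.norm(W_fit.T - W_true) / np.linalg.norm(W_true))
```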

Tuesday, February 20, 2024

Scientists use Summit supercomputer to explore exotic stellar phenomena

Astrophysicists at the State University of New York, Stony Brook, and University of California, Berkeley created 3D simulations of X-ray bursts on the surfaces of neutron stars. Two views of these X-ray bursts are shown: the left column is viewed from above while the right column shows it from a shallow angle above the surface. The panels (from top to bottom) show the X-ray burst structure at 10 milliseconds, 20 milliseconds and 40 milliseconds of simulation time.
Image Credit: Michael Zingale/Department of Physics and Astronomy at SUNY Stony Brook.

Understanding how a thermonuclear flame spreads across the surface of a neutron star — and what that spreading can tell us about the relationship between the neutron star’s mass and its radius — can also reveal much about the star’s composition. 

Neutron stars — the compact remnants of supernova explosions — are found throughout the universe. Because most stars are in binary systems, it is possible for a neutron star to have a stellar companion. X-ray bursts occur when matter accretes on the surface of the neutron star from its companion and is compressed by the intense gravity of the neutron star, resulting in a thermonuclear explosion. 

Astrophysicists at the State University of New York, Stony Brook, and University of California, Berkeley, used the Oak Ridge Leadership Computing Facility’s Summit supercomputer, located at the Department of Energy’s Oak Ridge National Laboratory, to compare models of X-ray bursts in 2D and 3D. 

“We can see these events happen in finer detail with a simulation. One of the things we want to do is understand the properties of the neutron star because we want to understand how matter behaves at the extreme densities you would find in a neutron star,” said Michael Zingale, a professor in the Department of Physics and Astronomy at SUNY Stony Brook who led the project.

Wednesday, February 14, 2024

New Algorithm Disentangles Intrinsic Brain Patterns from Sensory Inputs

Image Credit: Omid Sani, using generative AI

Maryam Shanechi, Dean’s Professor of Electrical and Computer Engineering and founding director of the USC Center for Neurotechnology, and her team have developed a new machine learning method that reveals surprisingly consistent intrinsic brain patterns across different subjects by disentangling these patterns from the effect of visual inputs.

The work has been published in the Proceedings of the National Academy of Sciences (PNAS).

When performing various everyday movement behaviors, such as reaching for a book, our brain has to take in information, often in the form of visual input — for example, seeing where the book is. Our brain then has to process this information internally to coordinate the activity of our muscles and perform the movement. But how do millions of neurons in our brain perform such a task? Answering this question requires studying the neurons’ collective activity patterns while disentangling the effect of input from the neurons’ intrinsic (internal) processes, whether movement-relevant or not.

That’s what Shanechi, her PhD student Parsa Vahidi, and a research associate in her lab, Omid Sani, did by developing a new machine-learning method that models neural activity while considering both movement behavior and sensory input.
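One generic way to pose the problem (not necessarily the team's published method) is a linear dynamical system in which neural activity evolves under intrinsic dynamics plus an input term, so that variance can be attributed to each source:

```python
# Generic sketch: neural population activity as a linear dynamical system
# x[t+1] = A x[t] + B u[t], where u is the sensory input. "Disentangling"
# means attributing activity to the intrinsic term (A x) vs. input (B u).
# This illustrates the modeling problem, not the USC team's actual method.
import numpy as np

rng = np.random.default_rng(1)
n, m, T = 4, 2, 1000
A = 0.9 * np.linalg.qr(rng.normal(size=(n, n)))[0]  # stable intrinsic dynamics
B = rng.normal(size=(n, m))

u = rng.normal(size=(T, m))       # measured sensory input
x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + B @ u[t] + 0.05 * rng.normal(size=n)

intrinsic = x[:-1] @ A.T          # contribution of internal dynamics
driven = u[:-1] @ B.T             # contribution of the sensory input
print("variance from intrinsic dynamics:", intrinsic.var())
print("variance from sensory input:    ", driven.var())
```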

Wednesday, December 20, 2023

Cosmic lights in the forest

TACC’s Frontera, the fastest academic supercomputer in the US, is a strategic national capability computing system funded by the National Science Foundation.
Photo Credit: TACC.

Like celestial beacons, distant quasars produce the brightest light in the universe, emitting more light than our entire Milky Way galaxy. The light comes from matter ripped apart as it is swallowed by a supermassive black hole. Quasar light reveals clues about the large-scale structure of the universe as it shines through enormous clouds of neutral hydrogen gas, formed shortly after the Big Bang, on scales of 20 million light-years or more.

Using quasar light data, the National Science Foundation (NSF)-funded Frontera supercomputer at the Texas Advanced Computing Center (TACC) helped astronomers develop PRIYA, the largest suite of hydrodynamic simulations yet made for simulating large-scale structure in the universe.

“We’ve created a new simulation model to compare against data from the real universe,” said Simeon Bird, an assistant professor in astronomy at the University of California, Riverside.

Bird and colleagues developed PRIYA, which takes optical light data from the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey (SDSS). He and colleagues published their work announcing PRIYA in the Journal of Cosmology and Astroparticle Physics (JCAP). 

Thursday, December 14, 2023

Custom software speeds up, stabilizes high-profile ocean model

The illustration depicts ocean surface currents simulated by MPAS-Ocean.
Illustration Credit: Los Alamos National Laboratory, E3SM, U.S. Dept. of Energy

On the beach, ocean waves provide soothing white noise. But in scientific laboratories, they play a key role in weather forecasting and climate research. Along with the atmosphere, the ocean is typically one of the largest and most computationally demanding components of Earth system models like the Department of Energy’s Energy Exascale Earth System Model, or E3SM.

Most modern ocean models focus on two categories of waves: a barotropic system, which has a fast wave propagation speed, and a baroclinic system, which has a slow wave propagation speed. To help address the challenge of simulating these two modes simultaneously, a team from DOE’s Oak Ridge, Los Alamos and Sandia National Laboratories has developed a new solver algorithm that reduces the total run time of the Model for Prediction Across Scales-Ocean, or MPAS-Ocean, E3SM’s ocean circulation model, by 45%. 
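The article does not detail the new solver, but the underlying stiffness problem is easy to sketch: the fast barotropic mode would force a tiny time step on the entire model unless it is treated separately, for example by subcycling it inside each slow baroclinic step. A schematic illustration, with two oscillators standing in for the two wave systems:

```python
# Schematic split time stepping: subcycle the fast ("barotropic") mode
# with a small step while advancing the slow ("baroclinic") mode with a
# large one. Purely illustrative; MPAS-Ocean's solver is far more elaborate.

def symplectic_euler(x, v, omega, dt):
    """One semi-implicit Euler step for an oscillator x'' = -omega^2 x."""
    v = v - omega**2 * x * dt
    x = x + v * dt
    return x, v

omega_fast, omega_slow = 50.0, 1.0  # illustrative wave frequencies
dt_slow = 0.02
n_sub = 25                          # fast-mode subcycles per slow step
dt_fast = dt_slow / n_sub

xf, vf = 1.0, 0.0                   # fast ("barotropic") mode state
xs, vs = 1.0, 0.0                   # slow ("baroclinic") mode state

for step in range(500):             # 10 time units total
    for _ in range(n_sub):          # many cheap small steps for the fast wave
        xf, vf = symplectic_euler(xf, vf, omega_fast, dt_fast)
    xs, vs = symplectic_euler(xs, vs, omega_slow, dt_slow)  # one large step

print(f"fast mode x={xf:+.3f}, slow mode x={xs:+.3f}")
```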

The researchers tested their software on the Summit supercomputer at ORNL’s Oak Ridge Leadership Computing Facility, a DOE Office of Science user facility, and the Compy supercomputer at Pacific Northwest National Laboratory. They ran their primary simulations on the Cori and Perlmutter supercomputers at Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center, and their results were published in the International Journal of High Performance Computing Applications.

Tuesday, October 3, 2023

AI copilot enhances human precision for safer aviation

With Air-Guardian, a computer program can track where a human pilot is looking (using eye-tracking technology), so it can better understand what the pilot is focusing on. This helps the computer make better decisions that are in line with what the pilot is doing or intending to do.
Illustration Credit: Alex Shipps/MIT CSAIL via Midjourney

Imagine you're in an airplane with two pilots, one human and one computer. Both have their “hands” on the controllers, but they're always looking out for different things. If they're both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses something, the computer quickly takes over.

Meet the Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots grapple with an onslaught of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive copilot: a partnership between human and machine, rooted in understanding attention.

But how does it determine attention, exactly? For humans, it uses eye-tracking, and for the neural system, it relies on something called "saliency maps," which pinpoint where attention is directed. The maps serve as visual guides highlighting key regions within an image, aiding in grasping and deciphering the behavior of intricate algorithms. Air-Guardian identifies early signs of potential risks through these attention markers, instead of only intervening during safety breaches like traditional autopilot systems. 
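The article does not say exactly how Air-Guardian computes its saliency maps, but the textbook version uses input gradients: how much a small change in each pixel would change the network's output. A generic PyTorch sketch with a stand-in network:

```python
# Generic gradient-based saliency map: how much each input pixel
# influences the model's output. The article doesn't specify
# Air-Guardian's exact method; this is the textbook input-gradient version.
import torch
import torch.nn as nn

model = nn.Sequential(              # stand-in for the copilot's network
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 4)  # 4 hypothetical control outputs
)

image = torch.rand(1, 1, 32, 32, requires_grad=True)
output = model(image)
output[0, 0].backward()             # sensitivity of one control output

saliency = image.grad.abs().squeeze()  # high values = attention-worthy regions
print("most salient pixel (row, col):", divmod(int(saliency.argmax()), 32))
```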

Monday, June 19, 2023

GE Aerospace runs one of the world’s largest supercomputer simulations to test revolutionary new open fan engine architecture

CFM’s RISE open fan engine architecture.
Image Credit: GE Aerospace

To support the development of a revolutionary new open fan engine architecture for the future of flight, GE Aerospace has run simulations using the world’s fastest supercomputer capable of crunching data in excess of exascale speed, or more than a quintillion calculations per second.

To model engine performance and noise levels, GE Aerospace created software capable of operating on Frontier, a recently commissioned supercomputer at the U.S. Department of Energy’s (DOE) Oak Ridge National Laboratory with about 37,000 GPUs of processing power. For comparison, Frontier is so powerful that it would take every person on Earth combined more than four years to do what the supercomputer can in one second.

By coupling GE Aerospace’s computational fluid dynamics software with Frontier, GE was able to simulate air movement of a full-scale open fan design with incredible detail.

“Developing game-changing new aircraft engines requires game-changing technical capabilities. With supercomputing, GE Aerospace engineers are redefining the future of flight and solving problems that would have previously been impossible,” said Mohamed Ali, vice president and general manager of engineering for GE Aerospace.

Tuesday, June 13, 2023

AI helps show how the brain’s fluids flow

A video shows a perivascular space (area within white lines) into which the researchers injected tiny particles. The particles (shown as moving dots) are trailed by lines which indicate their direction. Having measured the position and velocity of the particles over time, the team then integrated this 2D video with physics-informed neural networks to create an unprecedented high-resolution, 3D look at the brain’s fluid flow system.
Video Credit: Douglas Kelley

New research targets diseases including Alzheimer’s.

A new artificial intelligence-based technique for measuring fluid flow around the brain’s blood vessels could have big implications for developing treatments for diseases such as Alzheimer’s.

The perivascular spaces that surround cerebral blood vessels transport water-like fluids around the brain and help sweep away waste. Alterations in the fluid flow are linked to neurological conditions, including Alzheimer’s, small vessel disease, strokes, and traumatic brain injuries but are difficult to measure in vivo.

A multidisciplinary team of mechanical engineers, neuroscientists, and computer scientists led by University of Rochester Associate Professor Douglas Kelley developed novel AI velocimetry measurements to accurately calculate brain fluid flow. The results are outlined in a study published by Proceedings of the National Academy of Sciences.
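The physics-informed neural networks mentioned in the caption train a network to fit sparse measurements while penalizing violations of the governing equations. The minimal sketch below uses a toy one-dimensional advection equation in place of the full fluid-dynamics constraints the study actually enforced:

```python
# Minimal physics-informed neural network (PINN) sketch: fit sparse
# measurements of u(x, t) while penalizing the residual of a toy
# advection equation u_t + c * u_x = 0. Illustrative only -- the study
# enforced full fluid-dynamics constraints on particle-tracking data.
import torch
import torch.nn as nn

c = 1.0                                    # assumed advection speed
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def pde_residual(x, t):
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    return u_t + c * u_x                   # zero when the PDE is satisfied

# Sparse "measurements" of a known solution u = sin(x - c t).
x_d, t_d = torch.rand(64, 1), torch.rand(64, 1)
u_d = torch.sin(x_d - c * t_d)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    data_loss = ((net(torch.cat([x_d, t_d], 1)) - u_d) ** 2).mean()
    phys_loss = (pde_residual(torch.rand(64, 1), torch.rand(64, 1)) ** 2).mean()
    (data_loss + phys_loss).backward()     # physics acts as a regularizer
    opt.step()
```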

Summit study fathoms troubled waters of ocean turbulence

Simulations performed on Oak Ridge National Laboratory’s Summit supercomputer generated one of the most detailed portraits to date of how turbulence disperses heat through ocean water under realistic conditions.
Image Credit: Miles Couchman

Simulations performed on the Summit supercomputer at the Department of Energy’s Oak Ridge National Laboratory revealed new insights into the role of turbulence in mixing fluids and could open new possibilities for projecting climate change and studying fluid dynamics.

The study, published in the Journal of Turbulence, used Summit to model the dynamics of a roughly 10-meter section of ocean. That study generated one of the most detailed simulations to date of how turbulence disperses heat through seawater under realistic conditions. The lessons learned can apply to other substances, such as pollution spreading through water or air.

“We’ve never been able to do this type of analysis before, partly because we couldn’t get samples at the necessary size,” said Miles Couchman, co-author and a postdoc at the University of Cambridge. “We needed a machine like Summit that could allow us to observe these details across the vast range of relevant scales.”

Monday, June 5, 2023

Computational model mimics humans’ ability to predict emotions

While a great deal of research has gone into training computer models to infer someone’s emotional state based on their facial expression, that is not the most important aspect of human emotional intelligence, says MIT Professor Rebecca Saxe. Much more important is the ability to predict someone’s emotional response to events before they occur.
Image Credit: Christine Daniloff, MIT
(CC BY-NC-ND 3.0)

When interacting with another person, you likely spend part of your time trying to anticipate how they will feel about what you’re saying or doing. This task requires a cognitive skill called theory of mind, which helps us to infer other people’s beliefs, desires, intentions, and emotions.

MIT neuroscientists have now designed a computational model that can predict other people’s emotions — including joy, gratitude, confusion, regret, and embarrassment — approximating human observers’ social intelligence. The model was designed to predict the emotions of people involved in a situation based on the prisoner’s dilemma, a classic game theory scenario in which two people must decide whether to cooperate with their partner or betray them. 

To build the model, the researchers incorporated several factors that have been hypothesized to influence people’s emotional reactions, including that person’s desires, their expectations in a particular situation, and whether anyone was watching their actions.

“These are very common, basic intuitions, and what we said is, we can take that very basic grammar and make a model that will learn to predict emotions from those features,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.
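A toy version of that "grammar" computes appraisal features, such as prediction error and regret, from hypothetical payoffs and expectations in a single prisoner's dilemma round. The mapping below is hand-written for illustration; the MIT model learns to predict emotions from features like these rather than hard-coding them.

```python
# Toy appraisal sketch for one prisoner's dilemma round. Payoffs and the
# feature set are hypothetical; the MIT model *learns* the mapping from
# features like these to predicted emotions.
PAYOFF = {  # (my_move, partner_move) -> my payoff
    ("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5, ("defect", "defect"): 1,
}

def appraise(my_move, partner_move, p_partner_cooperates, observed):
    # expected payoff given my beliefs about the partner
    expected = sum(
        p * PAYOFF[(my_move, pm)]
        for pm, p in [("cooperate", p_partner_cooperates),
                      ("defect", 1 - p_partner_cooperates)]
    )
    outcome = PAYOFF[(my_move, partner_move)]
    other_move = "defect" if my_move == "cooperate" else "cooperate"
    return {
        "prediction_error": outcome - expected,  # drives surprise/disappointment
        "regret_vs_other_move": PAYOFF[(other_move, partner_move)] - outcome,
        "audience": observed,                    # modulates embarrassment/pride
    }

print(appraise("cooperate", "defect", p_partner_cooperates=0.9, observed=True))
```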

Friday, April 14, 2023

Location intelligence shines a light on disinformation

Each dot represents a Twitterer discussing COVID-19 from April 16 to April 22, 2021. The closer the dots are to the center, the greater the influence. The brighter the color, the stronger the intent.
Image Credit: ORNL

Using disinformation to create political instability and battlefield confusion dates back millennia.

However, today’s disinformation actors use social media to amplify disinformation that users knowingly or, more often, unknowingly perpetuate. Such disinformation spreads quickly, threatening public health and safety. Indeed, the COVID-19 pandemic and recent global elections have given the world a front-row seat to this form of modern warfare.

A group at ORNL now studies such threats thanks to the evolution at the lab of location intelligence, or research that uses open data to understand places and the factors that influence human activity in them. In the past, location intelligence has informed emergency response, urban planning, transportation planning, energy conservation and policy decisions. Now, location intelligence at ORNL also helps identify disinformation, or shared information that is intentionally misleading, and its impacts.

Wednesday, April 12, 2023

ORNL, NOAA launch new supercomputer for climate science research

Photo Credit: Genevieve Martin/ORNL

Oak Ridge National Laboratory, in partnership with the National Oceanic and Atmospheric Administration, is launching a new supercomputer dedicated to climate science research. The new system is the fifth supercomputer to be installed and run by the National Climate-Computing Research Center at ORNL.

The NCRC was established in 2009 as part of a strategic partnership between NOAA and the U.S. Department of Energy and is responsible for the procurement, installation, testing and operation of several supercomputers dedicated to climate modeling and simulations. The goal of the partnership is to increase NOAA’s climate modeling capabilities to further critical climate research. To that end, the NCRC has installed a series of increasingly powerful computers since 2010, each of them formally named Gaea. The latest system, also referred to as C5, is an HPE Cray machine with over 10 petaflops (10 million billion calculations per second) of peak theoretical performance — almost double the power of the two previous systems combined.
