Scientific Frontline: Computer Science

Tuesday, February 20, 2024

Scientists use Summit supercomputer to explore exotic stellar phenomena

Astrophysicists at the State University of New York, Stony Brook, and the University of California, Berkeley, created 3D simulations of X-ray bursts on the surfaces of neutron stars. Two views of these X-ray bursts are shown: the left column views the burst from above, while the right column views it from a shallow angle above the surface. The panels (from top to bottom) show the X-ray burst structure at 10 milliseconds, 20 milliseconds and 40 milliseconds of simulation time.
Image Credit: Michael Zingale/Department of Physics and Astronomy at SUNY Stony Brook.

Understanding how a thermonuclear flame spreads across the surface of a neutron star, and what that spreading can tell us about the relationship between the star's mass and its radius, can reveal much about the star's composition.

Neutron stars — the compact remnants of supernova explosions — are found throughout the universe. Because most stars are in binary systems, it is possible for a neutron star to have a stellar companion. X-ray bursts occur when matter accretes on the surface of the neutron star from its companion and is compressed by the intense gravity of the neutron star, resulting in a thermonuclear explosion. 

Astrophysicists at the State University of New York, Stony Brook, and University of California, Berkeley, used the Oak Ridge Leadership Computing Facility’s Summit supercomputer, located at the Department of Energy’s Oak Ridge National Laboratory, to compare models of X-ray bursts in 2D and 3D. 

“We can see these events happen in finer detail with a simulation. One of the things we want to do is understand the properties of the neutron star because we want to understand how matter behaves at the extreme densities you would find in a neutron star,” said Michael Zingale, a professor in the Department of Physics and Astronomy at SUNY Stony Brook who led the project.

Wednesday, February 14, 2024

New Algorithm Disentangles Intrinsic Brain Patterns from Sensory Inputs

Image Credit: Omid Sani, using generative AI

Maryam Shanechi, Dean’s Professor of Electrical and Computer Engineering and founding director of the USC Center for Neurotechnology, and her team have developed a new machine learning method that reveals surprisingly consistent intrinsic brain patterns across different subjects by disentangling these patterns from the effect of visual inputs.

The work has been published in the Proceedings of the National Academy of Sciences (PNAS).

When performing various everyday movement behaviors, such as reaching for a book, our brain has to take in information, often in the form of visual input (for example, seeing where the book is). Our brain then has to process this information internally to coordinate the activity of our muscles and perform the movement. But how do millions of neurons in our brain perform such a task? Answering this question requires studying the neurons' collective activity patterns while disentangling the effect of input from the neurons' intrinsic (that is, internal) processes, whether movement-relevant or not.

That’s what Shanechi, her PhD student Parsa Vahidi, and a research associate in her lab, Omid Sani, did by developing a new machine-learning method that models neural activity while considering both movement behavior and sensory input.
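
To make the modeling idea concrete, here is a minimal, hypothetical sketch of an input-driven linear state-space model, the general family such methods build on: latent brain states evolve through intrinsic dynamics while also being pushed by a measured sensory input, and because the toy system is linear the input-driven contribution can be separated out exactly. The matrices, dimensions and toy input below are illustrative assumptions, not the published method.

# Minimal, hypothetical sketch: latent neural states evolve through intrinsic
# dynamics (A) while also being driven by measured sensory input (B u_t).
# Separating the two terms is what allows intrinsic patterns to be compared
# across subjects. This is NOT the authors' algorithm, just a generic
# input-driven linear state-space model.
import numpy as np

rng = np.random.default_rng(0)
T, n_latent, n_neurons = 500, 4, 30

A = 0.95 * np.linalg.qr(rng.standard_normal((n_latent, n_latent)))[0]  # intrinsic dynamics
B = rng.standard_normal((n_latent, 1))          # how sensory input drives the states
C = rng.standard_normal((n_neurons, n_latent))  # mapping from latent states to neurons

u = np.sin(np.linspace(0, 20, T))[:, None]      # toy "visual input" time series
x_full = np.zeros((T, n_latent))                # states with the input included
x_intr = np.zeros((T, n_latent))                # states with the input term removed
for t in range(1, T):
    w = 0.05 * rng.standard_normal(n_latent)    # same intrinsic noise for both runs
    x_full[t] = A @ x_full[t - 1] + B @ u[t - 1] + w
    x_intr[t] = A @ x_intr[t - 1] + w

y = x_full @ C.T                                # simulated neural recordings
input_driven = (x_full - x_intr) @ C.T          # portion attributable to the input
print("variance explained by input-driven component:",
      round(np.var(input_driven) / np.var(y), 3))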

Wednesday, December 20, 2023

Cosmic lights in the forest

TACC’s Frontera, the fastest academic supercomputer in the US, is a strategic national capability computing system funded by the National Science Foundation.
Photo Credit: TACC.

Like celestial beacons, distant quasars produce the brightest light in the universe, emitting more light than our entire Milky Way galaxy. The light comes from matter ripped apart as it is swallowed by a supermassive black hole. Quasar light reveals clues about the large-scale structure of the universe as it shines through enormous clouds of neutral hydrogen gas, formed shortly after the Big Bang, that span 20 million light-years or more.

Using quasar light data and the National Science Foundation (NSF)-funded Frontera supercomputer at the Texas Advanced Computing Center (TACC), astronomers developed PRIYA, the largest suite of hydrodynamic simulations yet made for modeling large-scale structure in the universe.

“We’ve created a new simulation model to compare against data from the real universe,” said Simeon Bird, an assistant professor in astronomy at the University of California, Riverside.

Bird and colleagues developed PRIYA, which takes optical light data from the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey (SDSS). They published the work announcing PRIYA in the Journal of Cosmology and Astroparticle Physics (JCAP).

Thursday, December 14, 2023

Custom software speeds up, stabilizes high-profile ocean model

The illustration depicts ocean surface currents simulated by MPAS-Ocean.
Illustration Credit: Los Alamos National Laboratory, E3SM, U.S. Dept. of Energy

On the beach, ocean waves provide soothing white noise. But in scientific laboratories, they play a key role in weather forecasting and climate research. Along with the atmosphere, the ocean is typically one of the largest and most computationally demanding components of Earth system models like the Department of Energy’s Energy Exascale Earth System Model, or E3SM.

Most modern ocean models focus on two categories of waves: a barotropic system, which has a fast wave propagation speed, and a baroclinic system, which has a slow wave propagation speed. To help address the challenge of simulating these two modes simultaneously, a team from DOE’s Oak Ridge, Los Alamos and Sandia National Laboratories has developed a new solver algorithm that reduces the total run time of the Model for Prediction Across Scales-Ocean, or MPAS-Ocean, E3SM’s ocean circulation model, by 45%. 
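
For intuition about why the two wave speeds matter computationally, the following toy sketch shows a generic split time-stepping scheme, in which the fast barotropic mode is subcycled with many small steps inside each large baroclinic step. The wave speeds, step sizes and one-dimensional upwind scheme are assumptions chosen only for illustration; this is not the new MPAS-Ocean solver.

# Illustrative sketch (not the MPAS-Ocean solver): "split" time stepping, where
# the fast barotropic mode is advanced with many small substeps inside each
# large baroclinic step. The toy wave equations and numbers are assumptions.
import numpy as np

nx = 200
dx = 1.0e3                      # grid spacing [m]
c_fast, c_slow = 200.0, 2.0     # barotropic vs. baroclinic wave speeds [m/s]
dt_slow = 0.5 * dx / c_slow     # stable step for the slow (baroclinic) mode
n_sub = int(np.ceil(c_fast / c_slow))   # substeps needed for the fast mode

def advect(field, c, dt):
    """One upwind step of a simple 1-D wave moving at speed c."""
    return field - c * dt / dx * (field - np.roll(field, 1))

x = np.arange(nx) * dx
baroclinic = np.exp(-((x - 50e3) / 10e3) ** 2)   # slow internal signal
barotropic = np.exp(-((x - 150e3) / 5e3) ** 2)   # fast surface signal

for step in range(100):                      # outer (baroclinic) loop
    baroclinic = advect(baroclinic, c_slow, dt_slow)
    for _ in range(n_sub):                   # subcycle the fast mode
        barotropic = advect(barotropic, c_fast, dt_slow / n_sub)

print(f"{n_sub} barotropic substeps per baroclinic step")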

The researchers tested their software on the Summit supercomputer at ORNL’s Oak Ridge Leadership Computing Facility, a DOE Office of Science user facility, and the Compy supercomputer at Pacific Northwest National Laboratory. They ran their primary simulations on the Cori and Perlmutter supercomputers at Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center, and their results were published in the International Journal of High Performance Computing Applications.

Tuesday, October 3, 2023

AI copilot enhances human precision for safer aviation

With Air-Guardian, a computer program can track where a human pilot is looking (using eye-tracking technology), so it can better understand what the pilot is focusing on. This helps the computer make better decisions that are in line with what the pilot is doing or intending to do.
Illustration Credit: Alex Shipps/MIT CSAIL via Midjourney

Imagine you're in an airplane with two pilots, one human and one computer. Both have their “hands” on the controllers, but they're always looking out for different things. If they're both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses something, the computer quickly takes over.

Meet the Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots grapple with an onslaught of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive copilot; a partnership between human and machine, rooted in understanding attention.

But how does it determine attention, exactly? For humans, it uses eye-tracking, and for the neural system, it relies on something called "saliency maps," which pinpoint where attention is directed. The maps serve as visual guides highlighting key regions within an image, aiding in grasping and deciphering the behavior of intricate algorithms. Air-Guardian identifies early signs of potential risks through these attention markers, instead of only intervening during safety breaches like traditional autopilot systems. 
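
As a rough illustration of the attention-comparison step (not the CSAIL implementation), the sketch below builds a gaze heatmap from eye-tracking fixations, normalizes a machine saliency map, and flags disagreement when the two focus on different regions. The map resolution, Gaussian width, overlap metric and threshold are all assumed.

# Hypothetical sketch of the attention-comparison idea only: build a gaze
# "heatmap" from eye-tracking fixations, compare it with a saliency map from
# the autonomous system, and flag disagreement. All parameters are illustrative
# assumptions, not the Air-Guardian implementation.
import numpy as np

H, W = 48, 64                                    # coarse attention-map resolution

def gaze_heatmap(fixations, sigma=3.0):
    """Sum of Gaussian bumps centered on eye-tracking fixations (row, col)."""
    yy, xx = np.mgrid[0:H, 0:W]
    hm = np.zeros((H, W))
    for fy, fx in fixations:
        hm += np.exp(-((yy - fy) ** 2 + (xx - fx) ** 2) / (2 * sigma ** 2))
    return hm / hm.sum()

def overlap(pilot_map, machine_map):
    """Histogram intersection: 1.0 means identical focus, 0.0 means no overlap."""
    machine_map = machine_map / machine_map.sum()
    return np.minimum(pilot_map, machine_map).sum()

pilot = gaze_heatmap([(10, 12), (11, 14)])       # pilot looking upper-left
machine = gaze_heatmap([(35, 50)])               # saliency peak lower-right

score = overlap(pilot, machine)
if score < 0.2:                                  # assumed disagreement threshold
    print(f"attention mismatch (overlap={score:.2f}): guardian may intervene")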

Monday, June 19, 2023

GE Aerospace runs one of the world’s largest supercomputer simulations to test revolutionary new open fan engine architecture

CFM’s RISE open fan engine architecture.
Image Credit: GE Aerospace

To support the development of a revolutionary new open fan engine architecture for the future of flight, GE Aerospace has run simulations using the world’s fastest supercomputer capable of crunching data in excess of exascale speed, or more than a quintillion calculations per second.

To model engine performance and noise levels, GE Aerospace created software capable of operating on Frontier, a recently commissioned supercomputer at the U.S. Department of Energy’s (DOE) Oak Ridge National Laboratory powered by about 37,000 GPUs. For comparison, Frontier is so powerful that it would take every person on Earth, working together, more than four years to do what the supercomputer can do in one second.
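
That comparison checks out as rough arithmetic, assuming round numbers of about 10^18 calculations per second for the machine and roughly 8 billion people each managing one calculation per second:

# Back-of-envelope check of the comparison above, under assumed round numbers:
# roughly 1e18 calculations per second for the machine and about 8 billion
# people each performing one calculation per second by hand.
flops = 1.0e18            # ~1 exaflop (assumption; Frontier exceeds this)
people = 8.0e9            # world population, one calculation per person per second
seconds = flops / people  # time for humanity to match one machine-second
years = seconds / (365 * 24 * 3600)
print(f"{years:.1f} years")   # about 4 years, consistent with the claim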

By coupling GE Aerospace’s computational fluid dynamics software with Frontier, GE was able to simulate air movement of a full-scale open fan design with incredible detail.

“Developing game-changing new aircraft engines requires game-changing technical capabilities. With supercomputing, GE Aerospace engineers are redefining the future of flight and solving problems that would have previously been impossible,” said Mohamed Ali, vice president and general manager of engineering for GE Aerospace.

Tuesday, June 13, 2023

AI helps show how the brain’s fluids flow

A video shows a perivascular space (area within white lines) into which the researchers injected tiny particles. The particles (shown as moving dots) are trailed by lines which indicate their direction. Having measured the position and velocity of the particles over time, the team then integrated this 2D video with physics-informed neural networks to create an unprecedented high-resolution, 3D look at the brain’s fluid flow system.
Video Credit: Douglas Kelley

New research targets diseases including Alzheimer’s.

A new artificial intelligence-based technique for measuring fluid flow around the brain’s blood vessels could have big implications for developing treatments for diseases such as Alzheimer’s.

The perivascular spaces that surround cerebral blood vessels transport water-like fluids around the brain and help sweep away waste. Alterations in the fluid flow are linked to neurological conditions, including Alzheimer’s, small vessel disease, strokes, and traumatic brain injuries but are difficult to measure in vivo.

A multidisciplinary team of mechanical engineers, neuroscientists, and computer scientists led by University of Rochester Associate Professor Douglas Kelley developed novel AI velocimetry measurements to accurately calculate brain fluid flow. The results are outlined in a study published by Proceedings of the National Academy of Sciences.
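
As the image caption notes, the "AI velocimetry" builds on physics-informed neural networks (PINNs), which fit a neural network to measured particle velocities while penalizing violations of the flow physics. Here is a generic, minimal sketch of that idea; it uses only a 2D incompressibility constraint and synthetic data, whereas the study enforces the full governing equations on real particle-tracking measurements, and the architecture and weights are illustrative assumptions.

# Minimal, generic PINN sketch: a small network maps space-time coordinates to
# a velocity field and is trained on (a) measured particle velocities and
# (b) a physics residual, here just 2-D incompressibility for brevity.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(                     # (x, y, t) -> (u, v)
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2),
)

# Toy "particle tracking" data: positions/times with observed velocities.
xyt_data = torch.rand(256, 3)
uv_data = torch.stack([xyt_data[:, 1], -xyt_data[:, 0]], dim=1)  # a swirling toy flow

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()

    # Data term: match the measured particle velocities.
    loss_data = ((net(xyt_data) - uv_data) ** 2).mean()

    # Physics term: penalize divergence du/dx + dv/dy at random collocation points.
    xyt_col = torch.rand(256, 3, requires_grad=True)
    uv = net(xyt_col)
    du = torch.autograd.grad(uv[:, 0].sum(), xyt_col, create_graph=True)[0][:, 0]
    dv = torch.autograd.grad(uv[:, 1].sum(), xyt_col, create_graph=True)[0][:, 1]
    loss_phys = ((du + dv) ** 2).mean()

    (loss_data + loss_phys).backward()
    opt.step()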

Summit study fathoms troubled waters of ocean turbulence

Simulations performed on Oak Ridge National Laboratory’s Summit supercomputer generated one of the most detailed portraits to date of how turbulence disperses heat through ocean water under realistic conditions.
Image Credit: Miles Couchman

Simulations performed on the Summit supercomputer at the Department of Energy’s Oak Ridge National Laboratory revealed new insights into the role of turbulence in mixing fluids and could open new possibilities for projecting climate change and studying fluid dynamics.

The study, published in the Journal of Turbulence, used Summit to model the dynamics of a roughly 10-meter section of ocean. That study generated one of the most detailed simulations to date of how turbulence disperses heat through seawater under realistic conditions. The lessons learned can apply to other substances, such as pollution spreading through water or air.

“We’ve never been able to do this type of analysis before, partly because we couldn’t get samples at the necessary size,” said Miles Couchman, co-author and a postdoc at the University of Cambridge. “We needed a machine like Summit that could allow us to observe these details across the vast range of relevant scales.”

Monday, June 5, 2023

Computational model mimics humans’ ability to predict emotions

While a great deal of research has gone into training computer models to infer someone’s emotional state based on their facial expression, that is not the most important aspect of human emotional intelligence, says MIT Professor Rebecca Saxe. Much more important is the ability to predict someone’s emotional response to events before they occur.
Image Credit: Christine Daniloff, MIT
(CC BY-NC-ND 3.0)

When interacting with another person, you likely spend part of your time trying to anticipate how they will feel about what you’re saying or doing. This task requires a cognitive skill called theory of mind, which helps us to infer other people’s beliefs, desires, intentions, and emotions.

MIT neuroscientists have now designed a computational model that can predict other people’s emotions — including joy, gratitude, confusion, regret, and embarrassment — approximating human observers’ social intelligence. The model was designed to predict the emotions of people involved in a situation based on the prisoner’s dilemma, a classic game theory scenario in which two people must decide whether to cooperate with their partner or betray them. 

To build the model, the researchers incorporated several factors that have been hypothesized to influence people’s emotional reactions, including that person’s desires, their expectations in a particular situation, and whether anyone was watching their actions.
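
As a purely hypothetical illustration of how such features could feed an emotion prediction (this is not the MIT model, which learns its mapping rather than relying on hand-written rules), consider a toy appraisal of a single prisoner's-dilemma round:

# Hypothetical illustration only: compute simple appraisal features for a
# prisoner's-dilemma outcome (payoff vs. expectation, betrayal, being watched)
# and map them to a few emotion scores with hand-tuned weights.
PAYOFFS = {  # (my move, partner's move) -> my payoff, a standard toy matrix
    ("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5, ("defect", "defect"): 1,
}

def appraise(my_move, partner_move, expected_payoff, observed_by_others):
    payoff = PAYOFFS[(my_move, partner_move)]
    surprise = payoff - expected_payoff          # better or worse than expected
    betrayed = my_move == "cooperate" and partner_move == "defect"
    guilty_act = my_move == "defect" and partner_move == "cooperate"
    # Hand-tuned, purely illustrative mapping from features to emotion scores.
    emotions = {
        "joy": max(surprise, 0) + 0.5 * payoff,
        "regret": max(-surprise, 0) + (2.0 if betrayed else 0.0),
        "gratitude": 2.0 if (my_move, partner_move) == ("cooperate", "cooperate") else 0.0,
        "embarrassment": 1.5 if guilty_act and observed_by_others else 0.0,
    }
    return emotions

scores = appraise("cooperate", "defect", expected_payoff=3, observed_by_others=True)
print(max(scores, key=scores.get))   # -> "regret" for a betrayed cooperator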

“These are very common, basic intuitions, and what we said is, we can take that very basic grammar and make a model that will learn to predict emotions from those features,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Friday, April 14, 2023

Location intelligence shines a light on disinformation

Each dot represents a Twitterer discussing COVID-19 from April 16 to April 22, 2021. The closer the dots are to the center, the greater the influence. The brighter the color, the stronger the intent.
Image Credit: ORNL

Using disinformation to create political instability and battlefield confusion dates back millennia.

However, today’s disinformation actors use social media to amplify disinformation that users knowingly or, more often, unknowingly perpetuate. Such disinformation spreads quickly, threatening public health and safety. Indeed, the COVID-19 pandemic and recent global elections have given the world a front-row seat to this form of modern warfare.

A group at ORNL now studies such threats thanks to the lab's development of location intelligence, research that uses open data to understand places and the factors that influence human activity in them. In the past, location intelligence has informed emergency response, urban planning, transportation planning, energy conservation and policy decisions. Now, location intelligence at ORNL also helps identify disinformation, or shared information that is intentionally misleading, and its impacts.

Wednesday, April 12, 2023

ORNL, NOAA launch new supercomputer for climate science research

Photo Credit: Genevieve Martin/ORNL

Oak Ridge National Laboratory, in partnership with the National Oceanic and Atmospheric Administration, is launching a new supercomputer dedicated to climate science research. The new system is the fifth supercomputer to be installed and run by the National Climate-Computing Research Center at ORNL.

The NCRC was established in 2009 as part of a strategic partnership between NOAA and the U.S. Department of Energy and is responsible for the procurement, installation, testing and operation of several supercomputers dedicated to climate modeling and simulations. The goal of the partnership is to increase NOAA’s climate modeling capabilities to further critical climate research. To that end, the NCRC has installed a series of increasingly powerful computers since 2010, each of them formally named Gaea. The latest system, also referred to as C5, is an HPE Cray machine with over 10 petaflops (10 million billion calculations per second) of peak theoretical performance, almost double the power of the two previous systems combined.

Monday, February 27, 2023

Hackers could try to take over a military aircraft; can a cyber shuffle stop them?

Sandia National Laboratories cybersecurity expert Chris Jenkins sits in front of a whiteboard with the original sketch of the moving target defense idea for which he is the team lead. When the COVID-19 pandemic hit, Jenkins began working from home, and his office whiteboard remained virtually undisturbed for more than two years.
Photo Credit: Craig Fritz

A cybersecurity technique that shuffles network addresses like a blackjack dealer shuffles playing cards could effectively befuddle hackers gambling for control of a military jet, commercial airliner or spacecraft, according to new research. However, the research also shows these defenses must be designed to counter increasingly sophisticated algorithms used to break them.

Many aircraft, spacecraft and weapons systems have an onboard computer network known as military standard 1553, commonly referred to as MIL-STD-1553, or even just 1553. The standard defines a tried-and-true protocol for letting systems like radar, flight controls and the heads-up display talk to each other.
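
To illustrate the shuffling idea in the abstract (this is not Sandia's design and not a real 1553 implementation), endpoints that share a secret can re-derive a fresh mapping of subsystems to remote-terminal addresses each epoch, so an eavesdropper never knows which address currently belongs to which system:

# Conceptual sketch of a moving-target address shuffle. The subsystem names,
# epoch scheme, and pre-shared key are illustrative assumptions.
import hashlib
import random

SUBSYSTEMS = ["radar", "flight_controls", "heads_up_display", "nav"]
RT_ADDRESSES = list(range(31))     # 1553 remote-terminal addresses (31 is broadcast)

def address_map(shared_secret: bytes, epoch: int) -> dict:
    """Derive this epoch's subsystem -> RT-address mapping from the shared secret."""
    seed = hashlib.sha256(shared_secret + epoch.to_bytes(8, "big")).digest()
    rng = random.Random(seed)
    shuffled = RT_ADDRESSES.copy()
    rng.shuffle(shuffled)
    return dict(zip(SUBSYSTEMS, shuffled))

secret = b"pre-shared key loaded before flight"   # illustrative placeholder
for epoch in range(3):                            # e.g., a new mapping every few seconds
    print(epoch, address_map(secret, epoch))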

Securing these networks against a cyberattack is a national security imperative, said Chris Jenkins, a Sandia National Laboratories cybersecurity scientist. If a hacker were to take over 1553 midflight, he said, the pilot could lose control of critical aircraft systems, and the impact could be devastating.

Jenkins is not alone in his concerns. Many researchers across the country are designing defenses for systems that utilize the MIL-STD-1553 protocol for command and control. Recently, Jenkins and his team at Sandia partnered with researchers at Purdue University in West Lafayette, Indiana, to test an idea that could secure these critical networks.

Monday, February 13, 2023

Efficient technique improves machine-learning models’ reliability

Researchers from MIT and the MIT-IBM Watson AI Lab have developed a new technique that can enable a machine-learning model to quantify how confident it is in its predictions, but does not require vast troves of new data and is much less computationally intensive than other techniques.
Image Credit: MIT News, iStock
Creative Commons Attribution Non-Commercial No Derivatives license

Powerful machine-learning models are being used to help people tackle tough problems such as identifying disease in medical images or detecting road obstacles for autonomous vehicles. But machine-learning models can make mistakes, so in high-stakes settings it’s critical that humans know when to trust a model’s predictions.

Uncertainty quantification is one tool that improves a model’s reliability; the model produces a score along with the prediction that expresses a confidence level that the prediction is correct. While uncertainty quantification can be useful, existing methods typically require retraining the entire model to give it that ability. Training involves showing a model millions of examples so it can learn a task. Retraining then requires millions of new data inputs, which can be expensive and difficult to obtain, and also uses huge amounts of computing resources.

Researchers at MIT and the MIT-IBM Watson AI Lab have now developed a technique that enables a model to perform more effective uncertainty quantification, while using far fewer computing resources than other methods, and no additional data. Their technique, which does not require a user to retrain or modify a model, is flexible enough for many applications.
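
The article does not spell out the mechanism, so as a stand-in here is one well-known way to attach calibrated confidence scores to an already-trained, frozen classifier without retraining it: temperature scaling, which fits a single scalar on a small held-out set. It is shown only to make the general idea of cheap uncertainty quantification concrete, not as the MIT-IBM technique.

# Generic example of adding confidence scores to a frozen classifier without
# retraining: fit one temperature parameter on held-out logits. The toy logits
# and labels below are synthetic.
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(temperature, logits, labels):
    probs = softmax(logits / temperature)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# Pretend these came from an already-trained model on a small validation set.
rng = np.random.default_rng(0)
val_logits = rng.normal(size=(500, 10)) * 4.0          # overconfident toy logits
val_labels = rng.integers(0, 10, size=500)

result = minimize_scalar(nll, bounds=(0.5, 20.0), args=(val_logits, val_labels),
                         method="bounded")
T = result.x
confidence = softmax(val_logits / T).max(axis=1)        # calibrated confidence scores
print(f"fitted temperature T={T:.2f}, mean confidence {confidence.mean():.2f}")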

Wednesday, February 1, 2023

Learning with all your senses: Multimodal enrichment as the optimal learning strategy of the future

Illustration Credit: John Hain

Neuroscientist Katharina von Kriegstein from Technische Universität Dresden and Brian Mathias from the University of Aberdeen have compiled extensive interdisciplinary findings from neuroscience, psychology, computer modelling and education on the topic of "learning" in a recent review article in the journal Trends in Cognitive Sciences. The review identifies the mechanisms the brain uses to achieve improved learning outcomes when multiple senses or movements are combined during learning. These benefits apply to a wide variety of domains, such as letter and vocabulary acquisition, reading, mathematics, music, and spatial orientation.

Many educational approaches assume that integrating complementary sensory and motor information into the learning experience can enhance learning; for example, gestures help in learning new vocabulary in foreign language classes. In their recent publication, von Kriegstein and Mathias summarize these methods under the term "multimodal enrichment," meaning enrichment with multiple senses and movement. Numerous current scientific studies show that multimodal enrichment can enhance learning outcomes, and experiments in classrooms show similar results.

Tuesday, January 31, 2023

A fresh look at restoring power to the grid

Sandia National Laboratories computer scientists Casey Doyle, left, and Kevin Stamber stand in front of an electrical switching station. Their team has developed a computer model to determine the best way to restore power to a grid after a disruption, such as a complete blackout caused by extreme weather.
Photo Credit: Craig Fritz

Climate change can alter extreme weather events, and these events have the potential to strain, disrupt or damage the nation’s grid.

Sandia National Laboratories computer scientists have been working on an innovative computer model to help grid operators quickly restore power to the grid after a complete disruption, a process called “black start.”

Their model combines a restoration-optimization model with a computer model of how grid operators would make decisions when they don’t have complete knowledge of every generator and distribution line. The model also includes a physics-based understanding of how the individual power generators, distribution substations and power lines would react during the process of restoring power to the grid.
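
A drastically simplified sketch of the black-start logic (a toy greedy rule, not the Sandia restoration-optimization model) might re-energize substations outward from a black-start-capable unit whenever enough spare generation is available:

# Toy black-start sketch. The grid topology, MW figures, and greedy rule are
# all illustrative assumptions.
grid = {                       # substation -> (neighbors, load in MW, generation in MW)
    "hydro_blackstart": (["sub_A"], 0, 50),
    "sub_A": (["hydro_blackstart", "sub_B", "gas_plant"], 20, 0),
    "sub_B": (["sub_A", "sub_C"], 20, 0),
    "gas_plant": (["sub_A"], 10, 120),
    "sub_C": (["sub_B"], 60, 0),
}

energized = {"hydro_blackstart"}
spare = 50                                       # MW available from the black-start unit
progress = True
while progress:
    progress = False
    for node, (neighbors, load, gen) in grid.items():
        if node in energized or not any(n in energized for n in neighbors):
            continue                             # only restore from energized neighbors
        if load <= spare:                        # enough spare capacity to pick up the load?
            energized.add(node)
            spare += gen - load
            print(f"restored {node:16s} spare capacity now {spare} MW")
            progress = True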

“We’ve spent a lot of time thinking about how we go beyond simply looking at this as a multi-layered optimization problem,” said project lead Kevin Stamber. “When we start to discuss disruptions to the electric grid, being able to act on the available information and provide a response is critical. The operator still has to work that restoration solution against the grid and see whether or not they are getting the types of reactions from the system that they expect to see.”

The overarching model also can simulate black starts triggered by human-caused disruptions such as a successful cyberattack.

Thursday, December 15, 2022

Greenland’s Glaciers Might Be Melting 100 Times As Fast As Previously Thought

A melting glacier on the coast of Greenland.
Photo Credit: Dr. Lorenz Meire, Greenland Climate Research Center.

Researchers at the Oden Institute for Computational Engineering and Sciences at The University of Texas at Austin have created a computer model that determines the rate at which Greenland’s glacier fronts are melting.

Published in the journal Geophysical Research Letters, the model is the first designed specifically for vertical glacier fronts – where ice meets the ocean at a sharp angle. It reflects recent observations of an Alaskan glacier front melting up to 100 times as fast as previously assumed. According to the researchers, the model can be used to improve both ocean and ice sheet models, which are crucial elements of any global climate model.

“Up to now, glacier front melt models have been based on results from the Antarctic, where the system is quite different,” said lead author Kirstin Schulz, a research associate in the Oden Institute’s Computational Research in Ice and Ocean Systems Group (CRIOS). “By using our model in an ocean or climate model, we can get a much better idea of how vertical glacier fronts are melting.”

Tuesday, December 13, 2022

AI model predicts if a COVID-19 test might be positive or not

Xingquan “Hill” Zhu, Ph.D., (left) senior author and a professor; and co-author Magdalyn E. Elkin, a Ph.D. student, both in FAU’s Department of Electrical Engineering and Computer Science.
Photo Credit: Florida Atlantic University

COVID-19 and its latest Omicron strains continue to cause infections across the country as well as globally. Serology (blood) and molecular tests are the two most commonly used methods for rapid COVID-19 testing. Because the two test types use different mechanisms, their results can vary significantly: molecular tests measure the presence of viral SARS-CoV-2 RNA, while serology tests detect the presence of antibodies triggered by the SARS-CoV-2 virus.

Currently, there is no existing study on the correlation between serology and molecular tests and which COVID-19 symptoms play a key role in producing a positive test result. A study from Florida Atlantic University’s College of Engineering and Computer Science using machine learning provides important new evidence for understanding how molecular and serology tests are correlated, and which features are the most useful in distinguishing between positive and negative COVID-19 test outcomes.

Researchers from the College of Engineering and Computer Science trained five classification algorithms to predict COVID-19 test results. They created an accurate predictive model using easy-to-obtain symptom and demographic features, such as number of days post-symptom onset, fever, temperature, age and gender.
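
A generic version of that workflow, with synthetic data standing in for the study's survey records and an arbitrary trio of standard classifiers, might look like the following; the feature set, label rule, and choice of algorithms are illustrative assumptions only, not the FAU study's data or model list.

# Generic sketch: train a few standard classifiers on symptom/demographic
# features and compare their accuracy at predicting a positive vs. negative test.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 14, n),          # days since symptom onset
    rng.integers(0, 2, n),           # fever reported (0/1)
    rng.normal(37.2, 0.8, n),        # temperature in Celsius
    rng.integers(18, 90, n),         # age
    rng.integers(0, 2, n),           # gender (encoded 0/1)
])
# Synthetic rule: fever and higher temperature raise the chance of a positive test.
logits = 1.5 * X[:, 1] + 1.2 * (X[:, 2] - 37.2) - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "random forest": RandomForestClassifier(n_estimators=200),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:20s} accuracy: {model.score(X_te, y_te):.3f}")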

Monday, December 12, 2022

Sandia, Intel seek novel memory tech to support stockpile mission

Developed at Sandia National Laboratories, a high-fidelity simulation of the hypersonic turbulent flow over a notional hypersonic flight vehicle, colored grey, depicts the speed of the air surrounding the body, with red as high and blue as low. The turbulent motions that impose harsh, unsteady loading on the vehicle body are depicted in the back portion of the vehicle. Accurately predicting these loads is critical to vehicle survivability, and for practical applications, billions of degrees of freedom are required to predict the physics of interest, inevitably requiring massive computing capabilities for realistic turnaround times. The work conducted as part of this research and development contract targets improving memory performance characteristics that can greatly benefit this and other mission applications.
Simulation Credit: Cory Stack

In pursuit of novel advanced memory technologies that would accelerate simulation and computing applications in support of the nation’s stockpile stewardship mission, Sandia National Laboratories, in partnership with Los Alamos and Lawrence Livermore national labs, has announced a research and development contract awarded to Intel Federal LLC, a wholly owned subsidiary of Intel Corporation.

Funded by the National Nuclear Security Administration’s Advanced Simulation and Computing program, the three national labs will collaborate with Intel Federal LLC on the project.

“ASC’s Advanced Memory Technology research projects are developing technologies that will impact future computer system architectures for complex modeling and simulation workloads,” said ASC program director Thuc Hoang. “We have selected several technologies that have the potential to deliver more than 40 times the application performance of our forthcoming NNSA Exascale systems.”

Sandia project lead James H. Laros III, a Distinguished Member of Technical Staff, said, “This effort will focus on improving bandwidth and latency characteristics of future memory systems, which should have a direct impact on application performance for a wide range of ASC mission codes.”

Thursday, December 1, 2022

New Stanford chip-scale laser isolator could transform photonics

From left, Alexander White, Geun Ho Ahn, and Jelena Vučković with the nanoscale isolator.
Photo Credit: Hannah Kleidermacher

Using well-known materials and manufacturing processes, researchers have built an effective, passive, ultrathin laser isolator that opens new research avenues in photonics.

Lasers are transformational devices, but one technical challenge prevents them from being even more so. The light they emit can reflect back into the laser itself and destabilize or even disable it. At real-world scales, this challenge is solved by bulky devices that use magnetism to block harmful reflections. At chip scale, however, where engineers hope lasers will one day transform computer circuitry, effective isolators have proved elusive.

Against that backdrop, researchers at Stanford University say they have created a simple and effective chip-scale isolator that can be laid down in a layer of semiconductor-based material hundreds of times thinner than a sheet of paper.

“Chip-scale isolation is one of the great open challenges in photonics,” said Jelena Vučković, a professor of electrical engineering at Stanford and senior author of the study appearing Dec. 1 in the journal Nature Photonics.

“Every laser needs an isolator to stop back reflections from coming into and destabilizing the laser,” said Alexander White, a doctoral candidate in Vučković’s lab and co-first author of the paper, adding that the device has implications for everyday computing, but could also influence next-generation technologies, like quantum computing.

Tuesday, November 29, 2022

Breaking the scaling limits of analog computing

MIT researchers have developed a technique that greatly reduces the error in an optical neural network, which uses light to process data instead of electrical signals. With their technique, the larger an optical neural network becomes, the lower the error in its computations. This could enable them to scale these devices up so they would be large enough for commercial uses.
Credit: SFLORG stock photo

As machine-learning models become larger and more complex, they require faster and more energy-efficient hardware to perform computations. Conventional digital computers are struggling to keep up.

An analog optical neural network could perform the same tasks as a digital one, such as image classification or speech recognition, but because computations are performed using light instead of electrical signals, optical neural networks can run many times faster while consuming less energy.

However, these analog devices are prone to hardware errors that can make computations less precise. Microscopic imperfections in hardware components are one cause of these errors. In an optical neural network that has many connected components, errors can quickly accumulate.

Even with error-correction techniques, due to fundamental properties of the devices that make up an optical neural network, some amount of error is unavoidable. A network that is large enough to be implemented in the real world would be far too imprecise to be effective.
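
The accumulation effect is easy to reproduce numerically. In the toy sketch below (which has nothing to do with MIT's hardware fix), each analog layer applies its weights with a small random fabrication error, and the deviation from the ideal output grows as layers are stacked; the layer count and 1% error level are assumptions.

# Numerical illustration of error accumulation in a cascaded analog network.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # neurons per layer
error_std = 0.01                         # 1% per-component hardware error (assumed)

def random_orthogonal(n):
    """A random orthogonal matrix standing in for one ideal analog layer."""
    return np.linalg.qr(rng.standard_normal((n, n)))[0]

x = rng.standard_normal(n)
for depth in [1, 2, 4, 8, 16, 32]:
    ideal, noisy = x.copy(), x.copy()
    for W in (random_orthogonal(n) for _ in range(depth)):
        ideal = W @ ideal                                        # perfect layer
        noisy = (W * (1 + error_std * rng.standard_normal(W.shape))) @ noisy  # imperfect layer
    rel_err = np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal)
    print(f"{depth:2d} layers -> relative output error {rel_err:.3f}")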

MIT researchers have overcome this hurdle and found a way to effectively scale an optical neural network. By adding a tiny hardware component to the optical switches that form the network’s architecture, they can reduce even the uncorrectable errors that would otherwise accumulate in the device.
