Scientific Frontline: Artificial Intelligence
Showing posts with label Artificial Intelligence.

Tuesday, September 27, 2022

Neurodegenerative disease can progress in newly identified patterns


Neurodegenerative diseases — like amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease), Alzheimer’s, and Parkinson’s — are complicated, chronic ailments that can present with a variety of symptoms, worsen at different rates, and have many underlying genetic and environmental causes, some of which are unknown. ALS, in particular, affects voluntary muscle movement and is always fatal, but while most people survive for only a few years after diagnosis, others live with the disease for decades. Manifestations of ALS can also vary significantly: slower disease development often correlates with onset in the limbs, affecting fine motor skills first, while the more serious bulbar ALS impacts swallowing, speaking, breathing, and mobility. Understanding the progression of diseases like ALS is therefore critical to enrollment in clinical trials, analysis of potential interventions, and discovery of root causes.

However, assessing disease evolution is far from straightforward. Current clinical studies typically assume that health declines along a linear trajectory on a symptom rating scale, and they use these linear models to evaluate whether drugs are slowing disease progression. Yet data indicate that ALS often follows nonlinear trajectories, with periods where symptoms are stable alternating with periods when they are rapidly changing. Since data can be sparse, and health assessments often rely on subjective rating metrics measured at uneven time intervals, comparisons across patient populations are difficult. This heterogeneity in data and progression, in turn, complicates analyses of intervention effectiveness and can mask disease origins.
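To make the modeling contrast concrete, here is a minimal sketch, not the study's code, that fits both a straight-line decline and a sigmoidal trajectory to a synthetic symptom-scale series; the visit times, scores, and sigmoid form are all invented for illustration.

```python
# Hypothetical example: linear vs. nonlinear fits to a noisy symptom-rating series.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
months = np.array([0.0, 3, 7, 12, 14, 20, 26, 31])    # uneven visit intervals
truth = 48 / (1 + np.exp(0.25 * (months - 18)))       # stable plateau, then rapid decline
scores = truth + rng.normal(0, 1.0, months.size)      # noisy clinic ratings

def linear(t, a, b):             # steady decline assumed by many trial analyses
    return a - b * t

def sigmoid(t, top, rate, mid):  # stable period giving way to rapid change
    return top / (1 + np.exp(rate * (t - mid)))

p_lin, _ = curve_fit(linear, months, scores)
p_sig, _ = curve_fit(sigmoid, months, scores, p0=[48, 0.2, 15])

for name, f, p in [("linear", linear, p_lin), ("sigmoid", sigmoid, p_sig)]:
    rmse = np.sqrt(np.mean((scores - f(months, *p)) ** 2))
    print(f"{name} model RMSE: {rmse:.2f}")
```

On data like these, the nonlinear fit typically shows a visibly lower error, which is exactly the gap a purely linear trial model would miss.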

Thursday, September 22, 2022

Conventional Computers Can Learn to Solve Tricky Quantum Problems

Hsin-Yuan (Robert) Huang
Credit: Caltech

There has been a lot of buzz about quantum computers, and for good reason. The futuristic computers are designed to mimic what happens in nature at microscopic scales, which means they have the power to better understand the quantum realm and speed up the discovery of new materials, including pharmaceuticals, environmentally friendly chemicals, and more. However, experts say viable quantum computers are still a decade or more away. What are researchers to do in the meantime?

A new Caltech-led study in the journal Science describes how machine learning tools, run on classical computers, can be used to make predictions about quantum systems and thus help researchers solve some of the trickiest physics and chemistry problems. While this notion has been proposed before, the new report is the first to mathematically prove that the method works in problems that no traditional algorithms could solve.
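To give a flavor of the idea, and emphatically not the paper's actual algorithm, the sketch below trains a classical kernel model to predict a ground-state property of a tiny quantum system directly from its Hamiltonian parameters; the two-spin transverse-field Ising model and the field range are assumptions chosen so the example runs in milliseconds.

```python
# Hypothetical example: classical ML learns a quantum ground-state property.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

def ground_state_correlation(h):
    """Exact <Z1 Z2> in the ground state of H = -Z1 Z2 - h(X1 + X2)."""
    H = -np.kron(Z, Z) - h * (np.kron(X, I) + np.kron(I, X))
    _, vecs = np.linalg.eigh(H)          # eigenvectors sorted by ascending energy
    psi = vecs[:, 0]
    return psi @ np.kron(Z, Z) @ psi

fields = np.linspace(0.1, 2.0, 25).reshape(-1, 1)     # training Hamiltonians
labels = np.array([ground_state_correlation(h) for h in fields.ravel()])

model = KernelRidge(kernel="rbf", gamma=2.0, alpha=1e-4).fit(fields, labels)
print(model.predict([[0.75]]))           # property of an unseen Hamiltonian
```

For two spins the exact answer is cheap, but the trained model generalizes from examples, which is the regime the Science paper analyzes for systems far too large to diagonalize.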

"Quantum computers are ideal for many types of physics and materials science problems," says lead author Hsin-Yuan (Robert) Huang, a graduate student working with John Preskill, the Richard P. Feynman Professor of Theoretical Physics and the Allen V. C. Davis and Lenabelle Davis Leadership Chair of the Institute for Quantum Science and Technology (IQIM). "But we aren't quite there yet and have been surprised to learn that classical machine learning methods can be used in the meantime. Ultimately, this paper is about showing what humans can learn about the physical world."

Wildfire smoke is unraveling decades of air quality gains

Over the last decade, PM2.5 from wildfire smoke has increased in much of the U.S., particularly in Western states, but some areas in the South and East have seen modest declines. This map shows the decadal change in smoke PM2.5, meaning the difference in daily average smoke PM2.5 during 2006−2010 compared to 2016−2020.
Image credit: Childs et al. 2022, Environmental Science & Technology

Stanford researchers have developed an AI model for predicting dangerous particle pollution to help track the American West’s rapidly worsening wildfire smoke. The detailed results show millions of Americans are routinely exposed to pollution at levels rarely seen just a decade ago.

Wildfire smoke now exposes millions of Americans each year to dangerous levels of fine particulate matter, lofting enough soot across parts of the West in recent years to erase much of the air quality gains made over the last two decades.

Those are among the findings of a new Stanford University study published Sept. 22 in Environmental Science & Technology that focuses on a type of particle pollution known as PM2.5, which can lodge deep in our lungs and even get into our bloodstream.

Using statistical modeling and artificial intelligence techniques, the researchers estimated concentrations of PM2.5 specifically from wildfire smoke in sharp enough detail to reveal variations within individual counties and individual smoke events from coast to coast from 2006 to 2020.
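The general shape of such a pipeline, though not the authors' actual model, can be sketched as follows; the predictor names (plume density, aerosol optical depth, wind) and all data are invented for illustration.

```python
# Hypothetical example: learn surface smoke PM2.5 from satellite-derived predictors.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
plume = rng.gamma(2.0, 1.5, n)                    # satellite smoke-plume indicator
aod = 0.3 * plume + rng.normal(0, 0.2, n)         # aerosol optical depth
wind = rng.uniform(0, 10, n)                      # dispersal
pm25 = 8 * plume + 5 * aod - 0.4 * wind + rng.normal(0, 2, n)  # monitor readings

X = np.column_stack([plume, aod, wind])
X_tr, X_te, y_tr, y_te = train_test_split(X, pm25, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")
```

A model trained this way against ground monitors can then be evaluated everywhere the satellite sees, which is what makes sub-county, event-level estimates possible.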

“We found that people are being exposed to more days with wildfire smoke and more extreme days with high levels of fine particulate matter from smoke,” said lead study author Marissa Childs, who worked on the research as a PhD student in Stanford’s Emmett Interdisciplinary Program in Environment and Resources (E-IPER). Unlike other major pollutant sources, wildfire smoke is considered an “exceptional event” under the Clean Air Act, she explained, “which means an increasing portion of the particulate matter that people are exposed to is unregulated.”

Wednesday, September 21, 2022

Shutting down backup genes leads to cancer remission in mice

Abhinav Achreja, Ph.D., research fellow in biomedical engineering at the University of Michigan, and Deepak Nagrath, Ph.D., associate professor of biomedical engineering, work on ovarian cancer cell research in the bio-engineering lab at the North Campus Research Center (NCRC).
Image credit: Marcin Szczepanski, Michigan Engineering

The way that tumor cells enable their uncontrolled growth is also a weakness that can be harnessed to treat cancer, researchers at the University of Michigan and Indiana University have shown.

Their machine-learning algorithm can identify backup genes that only tumor cells are using so that drugs can target cancer precisely.
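The published algorithm itself is not reproduced here, but the targeting logic can be caricatured in a few lines: a backup gene is interesting as a drug target when tumors have silenced the primary gene and induced the backup, while normal tissue never needs the backup at all. Every gene name, threshold, and expression value below is invented.

```python
# Hypothetical example: flag candidate backup genes that only tumor cells rely on.
import pandas as pd

expr = pd.DataFrame({
    "gene":       ["GENE_A", "GENE_A_BACKUP", "GENE_B", "GENE_B_BACKUP"],
    "normal_tpm": [120.0,     2.0,             95.0,     60.0],
    "tumor_tpm":  [3.0,       140.0,           90.0,     65.0],
}).set_index("gene")

primary_to_backup = {"GENE_A": "GENE_A_BACKUP", "GENE_B": "GENE_B_BACKUP"}

def candidate_targets(expr, pairs, silenced_max=10, induced_min=50):
    """Backup is a candidate when the primary is silenced in tumors and the
    backup is induced there but stays quiet in normal tissue."""
    hits = []
    for primary, backup in pairs.items():
        if (expr.loc[primary, "tumor_tpm"] < silenced_max
                and expr.loc[backup, "tumor_tpm"] > induced_min
                and expr.loc[backup, "normal_tpm"] < silenced_max):
            hits.append(backup)
    return hits

print(candidate_targets(expr, primary_to_backup))   # -> ['GENE_A_BACKUP']
```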

“Most cancer drugs affect normal tissues and cells. However, our strategy allows specific targeting of cancer cells.”
Deepak Nagrath

The team demonstrated this new precision medicine approach for treating ovarian cancer in mice. Moreover, the cellular behavior that exposes these vulnerabilities is common across most forms of cancer, meaning the algorithms could provide better treatment plans for a host of malignancies.

“This could revolutionize the precision medicine field because the drug targeting will only affect and kill cancer cells and spare the normal cells,” said Deepak Nagrath, a U-M associate professor of biomedical engineering and senior author of the study in Nature Metabolism. “Most cancer drugs affect normal tissues and cells. However, our strategy allows specific targeting of cancer cells.”

Saturday, September 17, 2022

Even the smartest AI models don’t match human visual processing

The study employed novel visual stimuli called “Frankensteins.”
Source/Credit: York University

Deep convolutional neural networks (DCNNs) don’t see objects the way humans do – using configural shape perception – and that could be dangerous in real-world AI applications, says Professor James Elder, co-author of a York University study.

Published in the Cell Press journal iScience, “Deep learning models fail to capture the configural nature of human shape perception” is a collaborative study by Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York’s Centre for AI & Society, and Assistant Psychology Professor Nicholas Baker at Loyola College in Chicago, a former VISTA postdoctoral fellow at York.

The study employed novel visual stimuli called “Frankensteins” to explore how the human brain and DCNNs process holistic, configural object properties.

“Frankensteins are simply objects that have been taken apart and put back together the wrong way around,” says Elder. “As a result, they have all the right local features, but in the wrong places.”

The investigators found that while the human visual system is confused by Frankensteins, DCNNs are not – revealing an insensitivity to configural object properties.
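A rough way to probe this effect yourself, though not the authors' protocol or stimuli, is to rearrange the parts of an image and compare a pretrained network's predictions on the intact and shuffled versions; the quadrant swap below is a crude stand-in for a Frankenstein, and object.jpg is a hypothetical input file.

```python
# Hypothetical example: does a pretrained CNN notice when parts are rearranged?
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
prep = weights.transforms()

def quadrant_shuffle(img):
    w, h = img.size
    boxes = [(0, 0, w // 2, h // 2), (w // 2, 0, w, h // 2),
             (0, h // 2, w // 2, h), (w // 2, h // 2, w, h)]
    parts = [img.crop(b) for b in boxes]
    out = Image.new(img.mode, (w, h))
    for part, (x0, y0, _, _) in zip(parts, reversed(boxes)):
        out.paste(part, (x0, y0))    # paste each quadrant in the opposite corner
    return out

img = Image.open("object.jpg").convert("RGB")
with torch.no_grad():
    for name, im in [("intact", img), ("shuffled", quadrant_shuffle(img))]:
        probs = model(prep(im).unsqueeze(0)).softmax(dim=-1)[0]
        print(name, int(probs.argmax()), float(probs.max()))
```

If the network reports similar labels and confidences for both versions, it is behaving as the study describes: sensitive to local features but insensitive to their configuration.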

Tuesday, September 13, 2022

New method for comparing neural networks exposes how artificial intelligence works

Researchers at Los Alamos are looking at new ways to compare neural networks. This image was created with an artificial intelligence software called Stable Diffusion, using the prompt “Peeking into the black box of neural networks.”
Source: Los Alamos National Laboratory

A team at Los Alamos National Laboratory has developed a novel approach for comparing neural networks that looks within the “black box” of artificial intelligence to help researchers understand neural network behavior. Neural networks recognize patterns in datasets; they are used everywhere in society, in applications such as virtual assistants, facial recognition systems and self-driving cars.

“The artificial intelligence research community doesn’t necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don’t know how or why,” said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos. “Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI.”

Jones is the lead author of the paper “If You’ve Trained One You’ve Trained Them All: Inter-Architecture Similarity Increases With Robustness,” which was presented recently at the Conference on Uncertainty in Artificial Intelligence. In addition to studying network similarity, the paper is a crucial step toward characterizing the behavior of robust neural networks.
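The paper's exact similarity measure is not reproduced here, but one widely used way to quantify how alike two networks' internal representations are is linear centered kernel alignment (CKA); the sketch below computes it on made-up activation matrices.

```python
# Hypothetical example: linear CKA between hidden-layer activation matrices.
import numpy as np

def linear_cka(A, B):
    """Similarity of representations A, B with shape (samples, units)."""
    A = A - A.mean(0)
    B = B - B.mean(0)
    num = np.linalg.norm(B.T @ A, "fro") ** 2
    return num / (np.linalg.norm(A.T @ A, "fro") * np.linalg.norm(B.T @ B, "fro"))

rng = np.random.default_rng(0)
acts1 = rng.normal(size=(500, 64))          # network 1's activations on 500 inputs
acts2 = acts1 @ rng.normal(size=(64, 32))   # a linear read-out of the same code
acts3 = rng.normal(size=(500, 32))          # an unrelated network

print(linear_cka(acts1, acts2))   # clearly higher: representations align
print(linear_cka(acts1, acts3))   # near zero: representations differ
```

Scores like these, aggregated across layers and architectures, are the kind of evidence behind the paper's reported finding that robustness increases inter-architecture similarity.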

Friday, September 9, 2022

New AI system predicts how to prevent wildfires

Satellite image of Borneo in 2006 covered by smoke from fires (marked by red dots).
Image Credit: Jeff Schmaltz, MODIS Rapid Response Team / NASA

A machine learning model can evaluate the effectiveness of different management strategies

Wildfires are a growing threat in a world shaped by climate change. Now, researchers at Aalto University have developed a neural network model that can accurately predict the occurrence of fires in peatlands. They used the new model to assess the effect of different strategies for managing fire risk and identified a suite of interventions that would reduce fire incidence by 50-76%.

The study focused on the Central Kalimantan province of Borneo in Indonesia, which has the highest density of peatland fires in Southeast Asia. Drainage to support agriculture or residential expansion has made peatlands increasingly vulnerable to recurring fires. In addition to threatening lives and livelihoods, peatland fires release significant amounts of carbon dioxide. However, prevention strategies have faced difficulties because of the lack of clear, quantified links between proposed interventions and fire risk.

The new model uses measurements taken before each fire season in 2002-2019 to predict the distribution of peatland fires. While the findings can be broadly applied to peatlands elsewhere, a new analysis would have to be done for other contexts. ‘Our methodology could be used for other contexts, but this specific model would have to be re-trained on the new data,’ says Alexander Horton, the postdoctoral researcher who carried out the study.
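In outline only, and not the Aalto model itself, such a workflow trains a classifier on pre-season predictors and then re-scores the landscape under a hypothetical intervention; the predictors, the synthetic data, and the canal-blocking scenario below are all invented.

```python
# Hypothetical example: predict peatland fire risk, then test an intervention.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 1500
water_table = rng.normal(-0.4, 0.3, n)          # meters; more negative = drier peat
drained = rng.integers(0, 2, n).astype(float)   # drainage canals present?
forest = rng.uniform(0, 1, n)                   # remaining forest cover fraction
risk = -3 * water_table + 1.5 * drained - 2 * forest
burned = (risk + rng.logistic(0, 1, n) > 1.5).astype(int)

X = np.column_stack([water_table, drained, forest])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, burned)

scenario = X.copy()
scenario[:, 1] = 0.0                            # intervention: block all canals
before = clf.predict_proba(X)[:, 1].mean()
after = clf.predict_proba(scenario)[:, 1].mean()
print(f"mean predicted fire probability: {before:.2f} -> {after:.2f}")
```

Comparing predicted fire rates across such scenarios is how a trained model can rank management strategies before anyone commits to them on the ground.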

Tuesday, September 6, 2022

Artificial intelligence against child cancer

Stefan Posch
Photo Credit: Uni Halle / Markus Scholz

"Artificial Professor" is the nickname for a new research project at the University Hospital Leipzig and at the Martin Luther University Halle-Wittenberg (MLU). A team of doctors and bioinformatics wants to use self-learning software to significantly improve the therapy of lymphatic cancer (Hodgkin's lymphoma) in children. The second phase of the multi-year project recently began with 40,000 euros in funding from the Mitteldeutsche Kinderkrebsforschung Foundation.

Children affected by lymph gland cancer can now be cured in 95 percent of cases with modern treatment methods such as chemotherapy and radiation. However, intensive treatment in childhood often leads to late effects: irradiation in particular increases the risk of developing a second cancer later in life. Long-term studies show significant excess mortality in adulthood from secondary diseases such as cancer or heart disease.

The primary goal for physicians is therefore to give only as little treatment as necessary. The AI-based data analysis developed in the project is intended to help optimize the therapy for each individual patient. In the first phase, the researchers compiled and prepared a unique data set for the big-data analysis: over several years, a network of 270 child cancer clinics in 21 countries sent data from imaging PET examinations anonymously to Leipzig. The three-dimensional image series show how well individual therapies work and how the tumor tissue develops over time.

Monday, September 5, 2022

A Novel Approach to Creating Tailored Odors and Fragrances Using Machine Learning


Can we use machine learning methods to predict the sensing data of odor mixtures and design new smells? A new study by researchers from Tokyo Tech does just that. The novel method is bound to have applications in the food, health, beauty, and wellness industries, where odors and fragrances are of keen interest.

The sense of smell is one of the basic senses of animal species. It is critical for finding food, attracting mates, and sensing danger. Humans detect smells, or odorants, with olfactory receptors expressed in olfactory nerve cells. The olfactory impressions of odorants on nerve cells are associated with their molecular features and physicochemical properties, which makes it possible to tailor odors to create an intended odor impression. Current methods, however, only predict olfactory impressions from the physicochemical features of odorants; they cannot predict the sensing data, which is indispensable for creating smells.

To tackle this issue, scientists from Tokyo Institute of Technology (Tokyo Tech) have employed the innovative strategy of solving the inverse problem. Instead of predicting the smell from molecular data, this method predicts molecular features based on the odor impression. This is achieved using standard mass spectrum data and machine learning (ML) models. "We used a machine-learning-based odor predictive model that we had previously developed to obtain the odor impression. Then we predicted the mass spectrum from odor impression inversely based on the previously developed forward model," explains Professor Takamichi Nakamoto, the leader of the research effort by Tokyo Tech. The findings have been published in PLoS One.
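Schematically, the forward/inverse structure can be imitated as below; this is not the Tokyo Tech models, and the 20-dimensional spectra, the three impression axes, and the ridge-regression forward model are assumptions made for illustration.

```python
# Hypothetical example: fit spectrum -> impression, then invert for a target smell.
import numpy as np
from sklearn.linear_model import Ridge
from scipy.optimize import minimize

rng = np.random.default_rng(3)
spectra = rng.random((200, 20))                           # mass-spectrum features
W = rng.normal(size=(20, 3))
impressions = spectra @ W + rng.normal(0, 0.05, (200, 3)) # e.g. fruity/smoky/floral

forward = Ridge(alpha=1.0).fit(spectra, impressions)      # forward model

target = np.array([1.2, 0.1, 0.8])                        # desired odor impression
loss = lambda s: np.sum((forward.predict(s.reshape(1, -1))[0] - target) ** 2)
result = minimize(loss, x0=np.full(20, 0.5), bounds=[(0, 1)] * 20)
print("designed spectrum:", np.round(result.x, 2))
```

The optimized spectrum is the answer to the inverse problem: a recipe, in measurement space, for a smell the forward model says will produce the intended impression.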

Friday, September 2, 2022

How Artificial Intelligence can explain its decisions

They have brought together the seemingly incompatible inductive approach of machine learning with deductive logic: Stephanie Schörner, Axel Mosig and David Schuhmacher (from left).
Credit: RUB, Marquard

If an algorithm detects a tumor in a tissue sample, it does not reveal how it came to this result, which makes the result hard to trust. Bochum researchers are therefore taking a new approach.

Artificial intelligence (AI) can be trained to recognize whether a tissue image contains a tumor. How it makes its decision, however, has so far remained hidden. A team from the Research Center for Protein Diagnostics (PRODI) at the Ruhr University Bochum is developing a new approach with which the decision of an AI can be explained and thus made trustworthy. The researchers, led by Prof. Dr. Axel Mosig, describe the approach in the journal Medical Image Analysis.

Bioinformatician Axel Mosig cooperated with Prof. Dr. Andrea Tannapfel, head of the Institute of Pathology, the oncologist Prof. Dr. Anke Reinacher-Schick from St. Josef Hospital of the Ruhr University, as well as the biophysicist and PRODI founding director Prof. Dr. Klaus Gerwert. The group developed a neural network, i.e. an AI that can classify whether a tissue sample contains a tumor or not. To do this, they fed the AI many microscopic tissue images, some of which contained tumors while others were tumor-free.
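The PRODI group's own explanation mechanism is not reproduced here, but a generic occlusion map conveys the flavor: mask patches of the input and record how much the predicted tumor probability drops, so large drops mark the regions the classifier relied on. The stub classifier below stands in for a trained network.

```python
# Hypothetical example: occlusion-based explanation of an image classifier.
import numpy as np

def tumor_probability(image):
    # Stand-in for a trained network: responds to bright pixels in one corner.
    return image[:8, :8].mean()

def occlusion_map(image, model, patch=4):
    base = model(image)
    heat = np.zeros_like(image)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0
            heat[i:i + patch, j:j + patch] = base - model(occluded)  # importance
    return heat

img = np.random.default_rng(4).random((16, 16))
print(np.round(occlusion_map(img, tumor_probability), 3))
```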

Thursday, September 1, 2022

New methodology predicts coronavirus and other infectious disease threats to wildlife

The rate at which emerging wildlife diseases infect humans has steadily increased over the last three decades. Outbreaks such as the global coronavirus pandemic and the recent monkeypox outbreak have heightened the urgent need for disease ecology tools to forecast when and where disease outbreaks are likely. A University of South Florida assistant professor helped develop a methodology that does just that: it predicts disease transmission from wildlife to humans and from one wildlife species to another, and determines who is at risk of infection.

The methodology is a machine-learning approach that identifies the influence of variables, such as location and climate, on known pathogens. Using only small amounts of information, the system is able to identify community hot spots at risk of infection on both global and local scales.

“Our main goal is to develop this tool for preventive measures,” said co-principal investigator Diego Santiago-Alarcon, assistant professor of integrative biology. “It’s difficult to have an all-purpose methodology that can be used to predict infections across all the diverse parasite systems, but with this research, we contribute to achieving that goal.”

With help from researchers at the Universidad Veracruzana and the Instituto de Ecología, located in Mexico, Santiago-Alarcon examined three host-pathogen systems – avian malaria, birds with West Nile virus and bats with coronavirus – to test the reliability and accuracy of the models generated by the methodology.

Soaking up the sun with artificial intelligence

Machine learning methods are being developed at Argonne to advance solar energy research with perovskites.
Credit: Maria Chan/ Argonne National Laboratory

The sun continuously transmits trillions of watts of energy to the Earth. It will be doing so for billions more years. Yet, we have only just begun tapping into that abundant, renewable source of energy at affordable cost.

A solar absorber is a material used to convert this energy into heat or electricity. Maria Chan, a scientist at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, has developed a machine learning method for screening many thousands of compounds as solar absorbers. Her co-author on this project was Arun Mannodi-Kanakkithodi, a former Argonne postdoc who is now an assistant professor at Purdue University.
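As a hedged illustration of what ML screening can look like, and not Chan's actual workflow, one can train a regressor on computed band gaps and then flag unseen compounds whose predicted gap lands near the roughly 1.3 eV optimum for single-junction solar absorbers; the descriptors and the toy gap relation below are invented.

```python
# Hypothetical example: screen compounds by predicted band gap.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 1000
en_diff = rng.uniform(0, 3, n)            # electronegativity difference descriptor
mean_z = rng.uniform(10, 80, n)           # mean atomic number descriptor
gap = 0.8 * en_diff + 0.01 * mean_z + rng.normal(0, 0.1, n)   # eV, toy relation

X = np.column_stack([en_diff, mean_z])
model = RandomForestRegressor(random_state=0).fit(X[:800], gap[:800])

candidates = X[800:]                      # compounds never measured or computed
pred = model.predict(candidates)
hits = candidates[np.abs(pred - 1.3) < 0.1]
print(f"{len(hits)} of {len(candidates)} candidates flagged for follow-up")
```

The payoff is throughput: a model that costs microseconds per prediction can triage thousands of compounds before any expensive quantum-chemistry calculation or synthesis.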

“We are truly in a new era of applying AI and high-performance computing to materials discovery.” — Maria Chan, scientist, Center for Nanoscale Materials

“According to a recent DOE study, by 2035, solar energy could power 40% of the nation’s electricity,” said Chan. ​“And it could help with decarbonizing the grid and provide many new jobs.”

Tuesday, August 23, 2022

Machine learning algorithm predicts how to get the most out of electric vehicle batteries

Credit: (Joenomias) Menno de Jong from Pixabay 

The researchers, from the University of Cambridge, say their algorithm could help drivers, manufacturers and businesses get the most out of the batteries that power electric vehicles by suggesting routes and driving patterns that minimize battery degradation and charging times.

The team developed a non-invasive way to probe batteries and get a holistic view of battery health. These results were then fed into a machine learning algorithm that can predict how different driving patterns will affect the future health of the battery.

"This method could unlock value in so many parts of the supply chain, whether you’re a manufacturer, an end user, or a recycler, because it allows us to capture the health of the battery beyond a single number"
Alpha Lee

If developed commercially, the algorithm could be used to recommend routes that get drivers from point to point in the shortest time without degrading the battery, for example, or recommend the fastest way to charge the battery without causing it to degrade. The results are reported in the journal Nature Communications.

The health of a battery, whether it’s in a smartphone or a car, is far more complex than a single number on a screen. “Battery health, like human health, is a multi-dimensional thing, and it can degrade in lots of different ways,” said first author Penelope Jones, from Cambridge’s Cavendish Laboratory. “Most methods of monitoring battery health assume that a battery is always used in the same way. But that’s not how we use batteries in real life. If I’m streaming a TV show on my phone, it’s going to run down the battery a whole lot faster than if I’m using it for messaging. It’s the same with electric cars – how you drive will affect how the battery degrades.”
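In very rough outline, and not the Cambridge team's method, the two-stage idea looks like this: extract features from a non-invasive electrical probe of each cell, then regress future capacity fade under a given usage pattern. All signals and the fade relation below are synthetic.

```python
# Hypothetical example: probe-derived features -> predicted capacity fade.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n_cells = 300
resistance = rng.uniform(0.01, 0.05, n_cells)   # ohms, from pulse-response probing
relaxation = rng.uniform(10, 60, n_cells)       # s, voltage-relaxation time constant
fast_charge = rng.integers(0, 2, n_cells)       # usage pattern: fast charging?
fade = (200 * resistance + 0.1 * relaxation + 3 * fast_charge
        + rng.normal(0, 0.5, n_cells))          # % capacity lost over next 100 cycles

X = np.column_stack([resistance, relaxation, fast_charge])
model = RandomForestRegressor(random_state=0).fit(X[:250], fade[:250])
print("predicted fade (%):", model.predict(X[250:253]).round(2))
```

Because the usage pattern is an input, the same model can compare driving or charging styles for a single battery, which is what route and charging recommendations would build on.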

Researchers develop the first AI-based method for dating archaeological remains

Credit: Unsplash

By analyzing DNA with the help of artificial intelligence (AI), an international research team led by Lund University in Sweden has developed a method that can accurately date up to ten-thousand-year-old human remains.

Accurately dating ancient humans is key when mapping how people migrated during world history.

The standard dating method since the 1950s has been radiocarbon dating. The method, which is based on the ratio between two different carbon isotopes, has revolutionized archaeology. However, the technology is not always completely reliable in terms of accuracy, making it complicated to map ancient people, how they moved and how they are related.

In a new study published in Cell Reports Methods, a research team has developed a dating method that could be of great interest to archaeologists and paleogenomicists.

“Unreliable dating is a major problem, resulting in vague and contradictory results. Our method uses artificial intelligence to date genomes via their DNA with great accuracy,” says Eran Elhaik, researcher in molecular cell biology at Lund University.

Thursday, August 18, 2022

A new neuromorphic chip for AI on the edge, at a small fraction of the energy and size of today’s compute platforms

 The NeuRRAM chip is an innovative neuromorphic chip
Credit: David Baillot/University of California San Diego

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of AI applications–all at a fraction of the energy consumed by computing platforms for general-purpose AI computing.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, and range from smart watches, to VR headsets, smart earbuds, smart sensors in factories and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that run computations in memory; it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

Wednesday, August 17, 2022

Machine learning meets medicine in the fight against superbugs

Scanning electron micrograph of a human neutrophil ingesting MRSA.
Image: National Institute of Allergy and Infectious Diseases, National Institutes of Health on Flickr, CC BY-NC 2.0

MRSA is an antibiotic-resistant staph infection that can be deadly for those in hospital care or with weakened immune systems. Staphylococcus aureus bacteria live in the nose without necessarily producing any symptoms but can also spread to other parts of the body, leading to persistent infections. Management of MRSA is long-term and laborious, so any steps to optimize treatments and reduce re-infections will benefit patients. This new research can predict how effective different treatments will be by combining patient data with estimates of how MRSA moves between different parts of the body. The study was published in the Journal of the Royal Society Interface.

The researchers compared data from 2000 patients with MRSA after hospital visits. In one group, patients were given standard information about how to treat MRSA and prevent its spread. The second group followed a more intensive ‘decolonization’ protocol to eliminate MRSA through wound disinfection, cleaning the armpits and groin, and using nasal spray. Both groups were tested for MRSA on different body parts at various time points over nine months.

The current state-of-the-art in medical research often involves comparing two groups in this way, to see if an intervention or treatment could be effective. The new study added another element: a mathematical model that looked at the interactions between treatments and body parts. 'The model shows how MRSA moves between body parts,' says senior author Pekka Marttinen, professor at Aalto University and the Finnish Center for Artificial Intelligence FCAI. 'It can help us optimize the combination of treatments and even predict how new treatments would work before they have been tested on patients.'
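A toy discrete-time Markov chain conveys the modeling idea, though the published model is more sophisticated; the body sites, the monthly check-up interval, and every transition probability below are invented.

```python
# Hypothetical example: MRSA colonization moving between body sites over time.
import numpy as np

sites = ["nose", "skin", "wound", "cleared"]
# Row = current state, column = state at the next monthly check-up.
standard = np.array([
    [0.70, 0.15, 0.05, 0.10],
    [0.20, 0.60, 0.05, 0.15],
    [0.10, 0.10, 0.75, 0.05],
    [0.00, 0.00, 0.00, 1.00],    # "cleared" is absorbing
])

decolonized = standard.copy()
decolonized[:3, :3] *= 0.75      # disinfection + nasal spray: less persistence
decolonized[:3, 3] = 1 - decolonized[:3, :3].sum(axis=1)

def p_cleared(P, start="nose", steps=9):   # nine monthly check-ups
    state = np.zeros(len(sites))
    state[sites.index(start)] = 1.0
    for _ in range(steps):
        state = state @ P
    return state[-1]

for name, P in [("standard care", standard), ("decolonization", decolonized)]:
    print(f"{name}: P(cleared by month 9) = {p_cleared(P):.2f}")
```

Fitting transition probabilities like these to the trial's swab data is what lets such a model extrapolate to treatment combinations that were never directly tested.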

Friday, August 12, 2022

Two Monumental Milestones Achieved in CT Imaging

Conventional chest CT image (left side) of the human airways compared to the new and improved PCD-CT system (right side). The image produced with the PCD-CT system shows better delineation of the bronchial walls. Preliminary studies showed that the PCD-CT system allowed radiologists to see smaller airways than with standard CT systems.
Image credit: Cynthia McCollough, Mayo Clinic, Rochester, Minnesota.

Two biomedical imaging technologies developed with support from the National Institute of Biomedical Imaging and Bioengineering (NIBIB) have been cleared for clinical use by the Food and Drug Administration (FDA). Both technologies offer advances in computed tomography (CT).

In one of these developments, project lead Cynthia McCollough, Ph.D., director of Mayo Clinic’s CT Clinical Innovation Center, and her team helped develop the first photon-counting detector (PCD)-CT system, which is superior to current CT technology. CT imaging has been an immense clinical asset for diagnosing many diseases and injuries. However, since its introduction into the clinic in 1971, the way that the CT detector converts x-rays to electrical signals has remained essentially the same. Photon-counting detectors operate using a fundamentally different mechanism than any prior CT detector.

“This is the first major imaging advancement cleared by the FDA for CT in a decade,” stated Behrouz Shabestari, Ph.D., director of the division of Health Informatics Technologies. “The impact of this development will be far-reaching and provide clinicians with more detailed information for medical diagnoses.”

A CT scan is obtained when an x-ray beam rotates around a patient, allowing x-rays to pass through the patient. As the x-rays leave the patient a picture is taken by a detector and the information is transmitted to a computer for further processing. “Standard CT detectors use a two-step process, where x-rays are turned into light and then light is converted to an electrical signal,” explained Cynthia McCollough. “The photon-counting detector uses a one-step process where the x-ray is immediately transformed into an electrical signal.”

AI could help patients with chronic pain avoid opioids

Image by Andrea from Pixabay

Cognitive behavioral therapy is an effective alternative to opioid painkillers for managing chronic pain. But getting patients to complete those programs is challenging, especially because psychotherapy often requires multiple sessions and mental health specialists are scarce.

A new study suggests that pain CBT supported by artificial intelligence renders the same results as guideline-recommended programs delivered by therapists, while requiring substantially less clinician time, making this therapy more accessible.

“Chronic pain is incredibly common: back pain, osteoarthritis, migraine headaches and more. Because of pain, people miss work, develop depression, some people drink more alcohol than is healthy, and chronic pain is one of the main drivers of the opioid epidemic,” said John Piette, a professor at the University of Michigan’s School of Public Health and senior research scientist at the Veterans Administration.

“We’re very excited about the results of this study, because we were able to demonstrate that we can achieve pain outcomes that are at least as good as standard cognitive behavioral therapy programs, and maybe even better. And we did that with less than half the therapist time as guideline-recommended approaches.”

Traditionally, CBT is delivered by a therapist in 6 to 12 weekly in-person sessions that target patients’ behaviors, help them cope mentally and assist them in regaining functioning.

Wednesday, August 10, 2022

AI May Come to the Rescue of Future Firefighters

A view from NIST's Burn Observation Bubble (BOB) of a burning structure during an experiment, one minute before flashover. 
Credit: NIST

In firefighting, the worst flames are the ones you don’t see coming. Amid the chaos of a burning building, it is difficult to notice the signs of impending flashover — a deadly fire phenomenon wherein nearly all combustible items in a room ignite suddenly. Flashover is one of the leading causes of firefighter deaths, but new research suggests that artificial intelligence (AI) could provide first responders with a much-needed heads-up.

Researchers at the National Institute of Standards and Technology (NIST), the Hong Kong Polytechnic University and other institutions have developed a Flashover Prediction Neural Network (FlashNet) model to forecast the lethal events precious seconds before they erupt. In a new study published in Engineering Applications of Artificial Intelligence, FlashNet boasted an accuracy of up to 92.1% across more than a dozen common residential floorplans in the U.S. and came out on top when going head-to-head with other AI-based flashover predicting programs.

Flashovers tend to suddenly flare up at approximately 600 degrees Celsius (1,100 degrees Fahrenheit) and can then cause temperatures to shoot up further. To anticipate these events, existing research tools either rely on constant streams of temperature data from burning buildings or use machine learning to fill in the missing data in the likely event that heat detectors succumb to high temperatures.
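FlashNet itself is not reproduced here, but the prediction task can be sketched as a time-series classifier over recent detector readings; the synthetic temperature traces and the 150-degree saturation cap mimicking failing heat detectors are assumptions.

```python
# Hypothetical example: classify whether a temperature trace precedes flashover.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(6)

def make_trace(will_flashover):
    t = np.arange(60)                                 # one reading per second
    rate = rng.uniform(3.0, 5.0) if will_flashover else rng.uniform(0.5, 2.0)
    temp = 20 + rate * t + rng.normal(0, 5, t.size)
    return np.minimum(temp, 150.0)                    # detectors saturate in heat

labels = rng.integers(0, 2, 400)
X = np.array([make_trace(y) for y in labels])

clf = GradientBoostingClassifier().fit(X[:300], labels[:300])
print(f"held-out accuracy: {clf.score(X[300:], labels[300:]):.2f}")
```

The saturation cap is the interesting part: once detectors fail, a model must infer the trajectory from the shape of the data it still has, which is the gap the NIST work addresses.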

Until now, most machine learning-based prediction tools, including one the authors previously developed, have been trained to operate in a single, familiar environment. In reality, firefighters are not afforded such luxury. As they charge into hostile territory, they may know little to nothing about the floorplan, the location of fire or whether doors are open or closed.

Tuesday, August 9, 2022

How water turns into ice — with quantum accuracy

Researchers at Princeton University combined artificial intelligence and quantum mechanics to simulate what happens at the molecular level when water freezes. The result is the most complete simulation yet of the first steps in ice “nucleation,” a process important for climate and weather modeling.  
Video by Pablo Piaggi, Princeton University

A team based at Princeton University has accurately simulated the initial steps of ice formation by applying artificial intelligence (AI) to solving equations that govern the quantum behavior of individual atoms and molecules.

The resulting simulation describes how water molecules transition into solid ice with quantum accuracy. This level of accuracy, once thought unreachable due to the amount of computing power it would require, became possible when the researchers incorporated deep neural networks, a form of artificial intelligence, into their methods. The study was published in the journal Proceedings of the National Academy of Sciences.

“In a sense, this is like a dream come true,” said Roberto Car, Princeton’s Ralph W. *31 Dornte Professor in Chemistry, who co-pioneered the approach of simulating molecular behaviors based on the underlying quantum laws more than 35 years ago. “Our hope then was that eventually we would be able to study systems like this one, but it was not possible without further conceptual development, and that development came via a completely different field, that of artificial intelligence and data science.”

The ability to model the initial steps in freezing water, a process called ice nucleation, could improve the accuracy of weather and climate modeling and benefit other processes like flash-freezing food.

The new approach enables the researchers to track the activity of hundreds of thousands of atoms over time periods thousands of times longer than in earlier studies, albeit still just fractions of a second.

Car co-invented the approach to using underlying quantum mechanical laws to predict the physical movements of atoms and molecules. Quantum mechanical laws dictate how atoms bind to each other to form molecules, and how molecules join with each other to form everyday objects.
